Why does the SAP Business One SDK use double for money? - sapb1

I have always read that I should use decimal or a similar type for money and financial calculations (example). However, as far as I can see, the Business One SDK uses double everywhere. Is this okay? Am I expected to convert the double values to decimal every time I do calculations, then back to double if I want to set them on an API object (which is what I currently do)?
Note: The SQL database uses numeric(19,6) for these values.

Despite what it might look like in Visual Studio, the SAP SDK isn't really a .NET-native library - it's a .NET wrapper over a C/C++ DLL using COM Interop.
decimal is not a native type in C/C++, and even in .NET it is not an intrinsic type in the way int and double are - there are no CPU instructions for decimal arithmetic, so decimal operations are implemented in software and are correspondingly slower than the basic types.
Because there is no equivalent in C/C++, the value would have to be translated into a type the underlying library understands - and that type would most likely be double anyway.
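To make the precision point concrete, here is a minimal sketch (not SAP's actual code; the helper names are made up) of the scaling a numeric(19,6) column implies. A double carries roughly 15-16 significant decimal digits, so it round-trips six decimal places for typical monetary magnitudes, although not for the full 19-digit range of the column:

    #include <cstdint>
    #include <cmath>
    #include <iomanip>
    #include <iostream>

    // Hypothetical: money kept as a 64-bit integer scaled by 10^6,
    // mirroring the numeric(19,6) column in the database.
    std::int64_t toScaled(double value)          { return static_cast<std::int64_t>(std::llround(value * 1e6)); }
    double       fromScaled(std::int64_t scaled) { return static_cast<double>(scaled) / 1e6; }

    int main() {
        double price = 1234.567891;              // value handed to the SDK as a double
        std::int64_t stored = toScaled(price);   // what the numeric(19,6) column can hold
        std::cout << std::setprecision(12) << fromScaled(stored) << '\n';  // prints 1234.567891
    }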

Related

Slow data transfer for embedded Mono on Linux

I have C/C++ code that currently uses Microsoft's C++/CLI support and runs on Windows. I am using embedded Mono in place of C++/CLI to port the code to Linux.
I use a C# class library. The algorithm is:
Set C# class fields.
Run a C# class member function.
Read the C# field data back into C++ space.
My problem is the transfer of numeric (mostly int and double) fields. Currently the mono_field_set_value function is more than 30 times slower than the equivalent field update on Windows with C++/CLI.
I am looking for ideas about what could cause the performance degradation and ways to improve it.
I currently have the C# DLL compiled on Windows for the .NET 4.7 runtime.
I also pin the C# classes with mono_gchandle_new(obj, true).
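For context, here is a rough sketch of the call pattern described above, using the Mono embedding API; the class, field and method names ("Worker", "input", "Compute", "output") are made up, and error handling is omitted:

    #include <cstdint>
    #include <mono/jit/jit.h>
    #include <mono/metadata/assembly.h>
    #include <mono/metadata/class.h>
    #include <mono/metadata/object.h>

    void run_once(MonoDomain* domain, MonoImage* image, double value)
    {
        MonoClass*  klass = mono_class_from_name(image, "MyLib", "Worker");
        MonoObject* obj   = mono_object_new(domain, klass);
        mono_runtime_object_init(obj);

        // Pin the managed object so the GC cannot move it while native code holds it.
        uint32_t handle = mono_gchandle_new(obj, /*pinned=*/true);

        // 1. Set C# class fields from native data (the slow step in the question).
        MonoClassField* inField = mono_class_get_field_from_name(klass, "input");
        mono_field_set_value(obj, inField, &value);

        // 2. Run the C# class member function.
        MonoMethod* method = mono_class_get_method_from_name(klass, "Compute", 0);
        mono_runtime_invoke(method, obj, nullptr, nullptr);

        // 3. Read the C# field data back into C++ space.
        double result = 0.0;
        MonoClassField* outField = mono_class_get_field_from_name(klass, "output");
        mono_field_get_value(obj, outField, &result);

        mono_gchandle_free(handle);
    }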

How to make a normal C library work with embedded environment?

I was recently asked about how to use a C library (Cello in this case) in an embedded environment, but I'm not sure how to go about that.
Is it correct to say that if a library can be compiled in the embedded environment, it can be used?
Should I care about making the library more lightweight or something like that?
Any suggestions are appreciated.
Getting it to compile is the bare minimum. Notably, most embedded systems are freestanding systems, such as microcontroller and RTOS applications. Compilers for freestanding systems need not provide all standard library headers; the only mandatory ones are (C17 4/6):
<float.h>, <iso646.h>, <limits.h>, <stdalign.h>, <stdarg.h>, <stdbool.h>,
<stddef.h>, <stdint.h>, <stdnoreturn.h>
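If you need to know at build time which kind of implementation you are targeting, the __STDC_HOSTED__ macro (required since C99, and provided by most C++ compilers as well) can be checked; a minimal sketch:

    /* Hosted vs. freestanding check at compile time. */
    #if defined(__STDC_HOSTED__) && __STDC_HOSTED__ == 1
    #include <stdio.h>   /* hosted: the full standard library is available */
    #else
    #include <stddef.h>  /* freestanding: only the minimal headers listed above are guaranteed */
    #endif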
In addition, an embedded system need not support floating point arithmetic. Some systems implement software floating point support, but using that is very bad practice. If your MCU does not have an FPU, you should not be using floating point arithmetic, or you picked the wrong MCU for the task, period.
"I need to represent this number with decimals internally or to the user" is not a valid reason for using floating point. Fixed point arithmetic should be used for that. You only need floating point if you are to use math libraries like math.h and more advanced math.
Traditionally, embedded system compilers have been slow to adopt the latest C standard. It's been quite a while since the C11 release now though, so at the moment all useful compilers have caught up with it (C17 only contains minor changes, so we can likely ignore that one). Historically, embedded compilers have been horribly bad at this, so remain sceptical. There shouldn't be any reason to pick a compiler without C11 support for new product development.
Summary for getting the lib to compile (bare minimum):
Does the library use hosted system headers, and if so does the embedded compiler support them?
Does the library use floating point and if so does the target system have a FPU, or at least a software floating point lib?
Does the library rely on the latest C standards and if so does the embedded compiler support them?
With that out of the way, you have to consider whether the library is written to be portable at all. Did they take care with things like integer types, enums and alignment? Are they using stdint.h, or "sloppy typing" with int all over the place? Did they consider endianness? Is the lib using dynamic allocation, which is banned in most embedded systems? Is it compatible with industry standards like MISRA-C? And so on.
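Two of those concerns in miniature (illustrative only): fixed-width types from stdint.h instead of "sloppy" int, and byte-wise deserialization so the code does not care about the host's endianness:

    #include <stdint.h>

    /* Sloppy: 'int' may be 16 bits on a small MCU and 32 bits on x86.  */
    /* int read_u16(const unsigned char *buf);                          */

    /* Portable: the width is part of the contract, and assembling the  */
    /* value byte by byte works on both big- and little-endian hosts.   */
    uint16_t read_u16_be(const uint8_t *buf)
    {
        return (uint16_t)((buf[0] << 8) | buf[1]);
    }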
Then there are optimizations to consider on top of that. Optimizing code for microcontrollers is very different from optimizing code for PC CPUs.
A brief glance at the various "compiler switches" (#ifdef) present usually gives a clue as to how portable the code is. Looking (very briefly) at this Cello lib, they seem to have considered porting between mainstream x86 systems, but that's it. You would have to rewrite pretty much the whole lib if you were to port it to an embedded system. The work effort depends on how alien the target CPU is compared to x86. Porting to a high-end, little-endian Cortex-A might not require much effort. Porting to some low-end crap MCU would require a monumental effort.
Code portability is a big topic and requires very competent C programmers. Making the very same code run on, for example, an x86-64 and a crappy 8-bit MCU is not a trivial task.
Professional libs like protocol stacks usually come with a system port for a specific MCU, where they have taken not just generic portability into account, but also the specific system.
Not all libraries that can be compiled can be used in embedded environments. Libraries that use malloc and free (or their C++ counterparts) are dangerous and should therefore be handled with care. These libraries can result in non-deterministic behaviour when memory allocations fail.
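A sketch of the usual alternative on small targets (sizes invented): a statically allocated pool with a trivial bump allocator, which fails deterministically instead of fragmenting at runtime:

    #include <stddef.h>
    #include <stdint.h>

    static uint8_t pool[4096];
    static size_t  pool_used = 0;

    void *pool_alloc(size_t n)
    {
        n = (n + 7u) & ~(size_t)7u;                     /* 8-byte alignment */
        if (pool_used + n > sizeof pool) return NULL;   /* out of memory is visible at the call site */
        void *p = &pool[pool_used];
        pool_used += n;
        return p;
    }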
It is possible that the whole C standard library could be compiled for embedded devices, but that doesn't mean you'll have much use for printf or scanf. So before asking whether you can compile it, the better question is whether you should use it. Cello seems like a fun experiment but isn't a stable platform to develop something real on. It can be done, though, and an example of that is the Espruino.
Most of the time it is a bad idea to rewrite a library to be 'lightweight' or, more importantly in an embedded environment, statically allocated. You are probably not as smart as those people, or you won't put in the time needed to create a complete, functional embedded fork that is as stable as the original, let alone better. Don't be dissuaded from a fun little side project, but don't depend on it for a real project.
Another problem could be that the library is too big for your microcontroller. The ATmega32A only has 32 KB of programmable flash. To take a C++ example off the top of my head: Boost won't fit in that space, for all the highly useful tools it provides.

Possibility to use 'wchar_t' string pointers in LabVIEW DLLs?

I'm planning a Windows-only measurement system that will be written in C++. This system should offer a DLL-based plugin system, so colleagues can create a kind of device driver for external hardware by programming specific DLLs themselves.
There are many clever guys here with experience in NI LabVIEW, and it is very likely that some of them will create those DLLs with that dev system.
From my own (not very up-to-date) experience with LV, I remember that back then there was no way to create or consume DLLs that use wchar_t-encoded string pointer parameters.
As my measurement system's API will only expose string parameters as wchar_t, would this be a problem for the LabVIEW guys, or do I have to provide extra functions with string params to be called by LV DLLs (which I'd like to avoid)?
LabVIEW does not have good built-in support for Unicode (or wchar_t), and using them in a program can be a big hassle. I think you have several options:
Rethink the use of wchar_t; to me it is surprising that a measurement device would necessarily need wchar_t. Of course, that depends completely on your system.
Write a wrapper DLL around your DLL for communication with LabVIEW or any other language that does not support wchar_t (a sketch follows below).
Write a conversion function in LabVIEW that retrieves the bare wchar_t array as an integer array and converts it to ASCII. Use this function after calls to your DLL.
It is good that you are thinking ahead and already trying to create DLLs that LabVIEW can communicate with. I think you just need to take it a step further and talk to the LabVIEW guys at your company to see which solution they prefer; that makes later integration so much easier.
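For the wrapper-DLL option, here is a rough sketch (not LabVIEW-specific; the function names are hypothetical, and the wide function is stubbed so the example is self-contained) of a narrow-string wrapper that something like LabVIEW's Call Library Function Node could call:

    #include <cstring>
    #include <string>

    // The wide-character export of the measurement system (stub).
    extern "C" int MeasureDevice_Open(const wchar_t * /*deviceName*/)
    {
        return 0;  // pretend success
    }

    // Narrow-string wrapper for callers without wchar_t support.
    extern "C" int MeasureDevice_OpenA(const char *deviceName)
    {
        // Naive widening: correct as long as the name is plain ASCII.
        std::wstring wide(deviceName, deviceName + std::strlen(deviceName));
        return MeasureDevice_Open(wide.c_str());
    }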

Differences between VB.NET and VB

I have a few questions in mind. I am new to this field of Visual Basic, so don't make fun of me.
1.) What are the differences between VB.NET and VB?
2.) I need to develop basic apps for Windows (like a notepad). Which one should I use?
3.) Is there an IDE available for both?
4.) If possible, can you suggest a good resource for learning VB or VB.NET?
Note: I know C and Java. I couldn't find a satisfactory answer anywhere.
Stackoverflow always provides the most precise answers.
1.) What are the differences between VB.NET and VB?
VB.NET is a modern, object-oriented language. VB (Classic) is its predecessor, and it's no longer actively maintained.
I don't know if that is what you are looking for, but a technical comparison can be found in Wikipedia:
Comparison of Visual Basic and Visual Basic .NET
2.) I need to develop basic apps for Windows (like a notepad). Which one should I use?
VB.NET. However, if you already know Java, the C# syntax might be more familiar to you. From a functional point of view, VB.NET and C# are almost equivalent.
3.) Is there an IDE available for both?
VB.NET applications can be developed with Visual Studio, the most recent version is 2013.
The VB Classic IDE is unsupported as of April 8, 2008.
4.) If possible, can you suggest a good resource for learning VB or VB.NET?
This is off-topic for Stack Overflow.
What is the difference between VB and VB.NET?
VB.NET is an object-oriented language. The following are some of the differences:
Data Type Changes
The .NET platform provides a Common Type System to all the supported languages. This means that all the languages must support the same data types, as enforced by the common language runtime. This eliminates data type incompatibilities between the various languages. For example, on the 32-bit Windows platform the integer data type takes 4 bytes in languages like C++, whereas in VB it takes 2 bytes. The following are the main changes related to data types in VB.NET:
Under .NET the Integer data type in VB.NET is also 4 bytes in size.
VB.NET has no Currency data type. Instead it provides Decimal as a replacement.
VB.NET introduces a new data type called Char. The Char data type takes 2 bytes and can store Unicode characters.
VB.NET does not have a Variant data type. To achieve a result similar to the Variant type you can use the Object data type. (Since everything in .NET, including the primitive data types, is an object, a variable of type Object can point to any data type.)
In VB.NET there is no concept of fixed-length strings.
In VB6 we used the Type keyword to declare our user-defined structures. VB.NET introduces the Structure keyword for the same purpose.
For more details you can refer to http://dev.fyicenter.com/Interview-Questions/dotNet-1/What_is_the_difference_between_VB_and_VB_NET_.html
To develop Windows apps, my preferred language is C#, but you can choose VB.NET too.
VB                               VB.NET
Interpreter-based language.      Compiled language; uses the CLS.
Not a type-safe language.        A type-safe language.
Backward compatible.             Not backward compatible.

Using ILNumerics with C++/CLI #ilnumerics

I'm looking for a numerical package that I can call from managed C++/CLI. ILNumerics is aimed at .NET development, so I would assume it can be used with / from C++/CLI.
However I cannot find any reference to this effect.
So can I use ILNumerics with C++/CLI? If so, where do I find typical resources and examples?
If not, is there a numerics package compatible with C++/CLI that can be called and integrated as straightforwardly as with C#? I am having a hard time finding one.
Thanks, Jan