I have a large VB.NET application that does FEM structural analysis. It requires double-precision math. The application also uses DirectX for the graphics. I now know that DirectX intentionally sets the floating-point unit (FPU) to single precision by default when it starts. That is a big problem. I need to figure out how to start DirectX while preserving double precision. Currently I start DirectX with the following:
Dev = New Device(0, DeviceType.Hardware, Panel2, CreateFlags.SoftwareVertexProcessing, pParams)
I have read that using “CreateFlags.FpuPreserve” as shown below will preserve double precision. But when I try this, DirectX does not start.
Dev = New Device(0, DeviceType.Hardware, Panel2, CreateFlags.FpuPreserve, pParams)
Can anybody tell me how to start DirectX from VB.NET and preserve double precision?
First, to get your code to work, you still need to specify software vertex processing. In VB, this is done as:
Dev = New Device(0, DeviceType.Hardware, Panel2, CreateFlags.SoftwareVertexProcessing Or CreateFlags.FpuPreserve, pParams)
Once you've fixed that, you'll find it still doesn't do what you want. FpuPreserve corresponds to D3DCREATE_FPU_PRESERVE. This affects the precision of floating-point calculations on the CPU, not on the GPU.
To get double-precision GPU calculations, you first need to use the Direct3D 11 API or later (it looks like you are using Direct3D 9). Even then, double-precision support is an optional feature; you have to query ID3D11Device::CheckFeatureSupport with D3D11_FEATURE_DOUBLES to see whether the hardware supports it.
In SharpDX it would be device.CheckFeatureSupport(Feature.ShaderDoubles), where device is of type Direct3D11.Device, or GraphicsDeviceFeatures.HasDoublePrecision if you are using SharpDX.Toolkit.Graphics.
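In native Direct3D 11 (C++), the check looks roughly like this. This is a minimal sketch that assumes device is an already-created ID3D11Device pointer:

// Query optional double-precision shader support on an existing device.
#include <d3d11.h>

bool SupportsShaderDoubles(ID3D11Device *device)
{
    D3D11_FEATURE_DATA_DOUBLES doubles = {};
    HRESULT hr = device->CheckFeatureSupport(D3D11_FEATURE_DOUBLES,
                                             &doubles, sizeof(doubles));
    // DoublePrecisionFloatShaderOps is nonzero only if the GPU/driver
    // expose double-precision operations to shaders.
    return SUCCEEDED(hr) && doubles.DoublePrecisionFloatShaderOps;
}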
Yes, that did it, thanks! The call needed "Or CreateFlags.FpuPreserve" added to it as shown above, the Or being critical. Double precision is then maintained in CPU/FPU calculations, which is exactly what I need. I don't know what precision is being used in the graphics calculations, but I don't care about that.
I have heard that the reason people use types such as NSInteger or CGFloat rather than int or float has something to do with 64-bit systems. I still don't understand why that is necessary, even though I do it throughout my own code. Basically, why would I need a larger integer for a 64-bit system?
People also say that it is not necessary on iOS at the moment, although it may become necessary in the future with 64-bit iPhones and such.
It is all explained here:
Introduction to 64-Bit Transition Guide
In the section Major 64-Bit Changes:
Data Type Size and Alignment
OS X uses two data models: ILP32 (in which integers, long integers, and pointers are 32-bit quantities) and LP64 (in which integers are 32-bit quantities, and long integers and pointers are 64-bit quantities). Other types are equivalent to their 32-bit counterparts (except for size_t and a few others that are defined based on the size of long integers or pointers).
While almost all UNIX and Linux implementations use LP64, other operating systems use various data models. Windows, for example, uses LLP64, in which long long variables and pointers are 64-bit quantities, while long integers are 32-bit quantities. Cray, by contrast, uses ILP64, in which int variables are also 64-bit quantities.
In OS X, the default alignment used for data structure layout is natural alignment (with a few exceptions noted below). Natural alignment means that data elements within a structure are aligned at intervals corresponding to the width of the underlying data type. For example, an int variable, which is 4 bytes wide, would be aligned on a 4-byte boundary.
There is a lot more that you can read in this document. It is very well written. I strongly recommend it to you.
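To see first-hand which data model a given compiler targets, you can print the sizes it gives the basic types. This is a minimal, stand-alone sketch (not taken from the guide):

// Prints the sizes of int, long, long long, and a pointer.
// Typical results: 4/4/8/4 means ILP32, 4/8/8/8 means LP64, 4/4/8/8 means LLP64.
#include <stdio.h>

int main(void)
{
    printf("int:       %zu bytes\n", sizeof(int));
    printf("long:      %zu bytes\n", sizeof(long));
    printf("long long: %zu bytes\n", sizeof(long long));
    printf("pointer:   %zu bytes\n", sizeof(void *));
    return 0;
}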
Essentially it boils down to this: If you use CGFloat/NSInteger/etc, Apple can make backwards-incompatible changes and you can mostly update your app by just recompiling your code. You really don't want to be going through your app, checking every use of int and double.
What backwards-incompatible changes? Plenty!
M68K to PowerPC
64-bit PowerPC
PowerPC to x86
x86-64 (I'm not sure if this came before or after iOS.)
iOS
CGFloat means "the floating-point type that CoreGraphics uses": double on OS X and float on iOS. If you use CGFloat, your code will work on both platforms without unnecessarily losing performance (on iOS) or precision (on OS X).
NSInteger and NSUInteger are less clear-cut, but they're used approximately where you might use ssize_t or size_t in standard C. int or unsigned int simply isn't big enough on 64-bit OS X, where you might have a list with more than ~2 billion items. (The unsignedness doesn't increase it to 4 billion due to the way NSNotFound is defined.)
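To make the mechanism concrete, platform-dependent typedefs of this kind behave roughly as follows. This is a simplified sketch; the real definitions live in Apple's CoreGraphics and Foundation headers and differ in detail (availability macros, CGFLOAT_IS_DOUBLE, and so on):

// Simplified illustration only, not Apple's actual header code.
#if defined(__LP64__) && __LP64__
typedef double CGFloat;   // 64-bit: double precision
typedef long   NSInteger; // 64 bits wide under LP64
#else
typedef float  CGFloat;   // 32-bit: single precision
typedef int    NSInteger; // 32 bits wide under ILP32
#endif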
Wouldn't it make more sense (for example) for an int to always be 4 bytes?
How do I ensure my C programs are cross-platform if variable sizes are implementation-specific?
The types' sizes aren't defined by C because C code needs to be able to compile on embedded systems as well as your average x86 processor and future processors.
You can include stdint.h and then use fixed-width types like:
int32_t (32-bit signed integer type)
uint32_t (32-bit unsigned integer type)
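For example, a minimal sketch of using the fixed-width types (the variable names are just illustrative):

// Fixed-width types have the same size on every conforming platform,
// unlike plain int or long.
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    int32_t  counter = -42;         // exactly 32 bits, signed, everywhere
    uint32_t mask    = 0xFFFF0000u; // exactly 32 bits, unsigned
    printf("counter=%d mask=0x%08X\n", (int)counter, (unsigned int)mask);
    return 0;
}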
C is often used to write low-level code that's specific to the CPU architecture it runs on. The size of an int, or of a pointer type, is supposed to map to the native types supported by the processor. On a 32-bit CPU, 32-bit ints make sense, but they won't fit on the smaller CPUs which were common in the early 1970s, or on the microcomputers which followed a decade later. Nowadays, if your processor has native support for 64-bit integer types, why would you want to be limited to 32-bit ints?
Of course, this makes writing truly portable programs more challenging, as you've suggested. The best way to ensure that you don't build accidental dependencies on a particular architecture's types into your programs is to port them to a variety of architectures early on, and to build and test them on all those architectures often. Over time, you'll become familiar with the portability issues, and you'll tend to write your code more carefully.
Becoming aware that not all processors have integer types the same width as their pointer types, or that not all computers use two's-complement arithmetic for signed integers, will help you to recognize these assumptions in your own code.
You need to check the size of int in your implementation. Don't assume it is always 4 bytes. Use
sizeof(int)
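If some piece of code really does depend on a 4-byte int, it can state that assumption explicitly so the build fails on a platform where it does not hold. A minimal sketch (C++11 static_assert; the messages are illustrative):

// Compile-time guards for assumptions about type sizes.
#include <climits>

static_assert(sizeof(int) == 4, "this code assumes a 32-bit int");
static_assert(CHAR_BIT == 8,    "this code assumes 8-bit bytes");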
I'm developing an application for a 16-bit embedded device (80251 microcontroller), and I need arbitrary precision arithmetic. Does anyone know of a library that works for the 8051 or 80251?
GMP doesn't explicitly support the 8051, and I'm wary of the problems I could run into on a 16-bit device.
Thanks
Try this one. Or, give us an idea of what you're trying to do with it; understanding the workload would help a lot. TTMath looks promising. Or, there are approximately a zillion of them listed in the Wikipedia article.
What are the advantages of using ATmega32 than other microcontrollers?
Is it better than PIC, ARM, and 8051?
Advantages
Still runs on 5 V, so legacy 5 V stuff interfaces cleaner
Even though it's 5 V capable, newer parts can run to 1.8 V. This wide range is very rare.
Nice instruction set, very good instruction throughput compared to other processors (HCS08, PIC12/16/18).
High quality GCC port (no proprietary crappy compilers!)
"PA" variants have good sleep mode capabilities, in micro-amperes.
Well rounded peripheral set
QTouch capability
Disadvantages
Still 8-bit. An ARM is a 16/32-bit workhorse, and will push a good amount more data around, at much higher clock speeds, than any 8-bit.
Cost. Can be expensive compared to HCS08 or other bargain 8-bit processors.
GCC toolchain has quirks, like the split memory model and limited 16-bit pointers.
Atmel is not the best supplier on the planet (at least they're not Maxim...)
In short, it is a very clean and easy-to-work-with 8-bit microcontroller.
An 8051 is legacy: the tools are passable, the architecture is bizarre (idata? xdata? non-reentrant functions in most compilers by default?).
PIC before PIC24 is also bizarre (register banking) and has poor clock-to-instruction throughput. There is no first-class open-source C compiler either.
PIC32 is competing with ARM7TDMI and ARM Cortex-M3, based on an adapted MIPS core, and has a GCC port (not main-lined).
AVR32 is competing with Cortex-M3, and offers a pretty good value, especially in the low power area.
MSP430 is the king for ultra low-power devices, and has a passable GCC port (if you're not targeting 430X).
HCS08 is very inexpensive, but poor instruction throughput. Peripherals vary quite a bit.
ARM used to be a higher-cost entry point, but with the introduction of the Cortex-M3 architecture, the price has been dropping compared to an 8-bit. For example, the LPC13xx series is comparable to an ATmega32 in many ways. Luminary (TI) has quite an impressive peripheral set.
It depends. First, you have to know what you want from the microcontroller.
In general:
PIC:
Old architecture. This means it's either expensive or slow
Targets only the low-end market (< a few MHz)
There's a lot of code written for it
ARM:
Scalable
Fast/cheap
ATmega is somewhere in between
I find the PIC family (before the MIPS version) to have the most painful instruction set of all, which means assembler is the language of choice if you want to conserve space, get performance, have control, etc.
The 8051 is a little less painful, with more registers, but it still takes a handful of instructions to do anything useful (meaning you cannot compare these to other chips from a MHz perspective). I like AVR in many ways: they embrace the homebrew and developer community, or if not directly, there is a much better community of developers out there compared to the competition. I don't like the instruction set, but it is decades ahead of the PIC and 8051. I like the MSP430 instruction set quite a bit; it is one of the best instruction sets for teaching assembler. TI is not as developer-friendly, though, and it can be a struggle. The eZ430 was on the right path, but the GoodFET is better, as you don't have it failing to work every other kernel version.
MSP430 and ARM have the best instruction sets as far as I am concerned, which leads to both good assemblers and good compiler tools. You can find commercial tools for all of the above, and certainly free tools for the 8051, MSP430, and ARM (MSP430 and ARM can use GCC; the 8051 cannot, so look for SDCC). For now, mspgcc4.sf.net and CodeSourcery are the places for GCC-based tools for MSP430 and ARM. LLVM supports both; I was able to get LLVM 2.7 to beat the latest GCC in a Dhrystone test, but that is one test. LLVM trails behind in performance but is improving.
As far as finding and creating free cross-compilers goes, I see LLVM as already the easiest to get and use, and going forward it will hopefully only get better. Sadly, the MSP430 port for LLVM was a "look what I could do in an afternoon" PowerPoint presentation and not a serious port.
My answer is that it depends on what you are doing, and I recommend you try all of them. These days the evaluation boards are in the sub-US$50 range and some in the sub-US$30 range. Even within the ARM family (ST, Atmel, Stellaris, LPC, etc.) there is a wide variety of features and quirks that you will only find if you try them. Avoid the LPCXpresso, mbed2, and STM32 Primer boards. Avoid LPC in general, and avoid the Cortex-M3 in general until you have cut your teeth on an ARM7. Look at SparkFun for Olimex and other boards. Although it is probably LPC, the ARMmite PRO and Arduino Pro are good choices. The eZ430 is a good MSP430 start, and I don't remember who is making 8051 stuff, Renasys (sp?). 8051s are not all created equal: the register space varies from one to another, and you have to prepare for that. I would probably look for an 8051 simulator if you want to play with the 8051.
I see AVR and definitely ARM continuing to dominate; I would like to see the MSP430 be used for things other than just super low power. With ARM, AVR, and MSP430 you can use and get used to GCC tools now and in the future, which has a lot of benefits; even if GCC isn't the best compiler in the world, it is by far the best-supported compiler. I would avoid proprietary compilers and tools. I would look for devices that have non-proprietary programming interfaces and are field-programmable. JTAG is good, but, for example, the new SWD JTAG on the Cortex-M3 is bad. TI MSP was hurt by this, but some hacking has resolved it, at least for now. I really don't have much good to say about PIC and won't try. A big thing to look for is glue logic: does the part or family have the SPI or I2C or whatever bus you want to use, and do you need an internal pull-up or wired-OR input?
Some chips just don't have that option and you have to add external hardware. Do you need an interrupt, with conditioning? ARM tends to win on this because it is a core used by many vendors, each of which puts its own I/O around it, so you can still live in the ARM world and have many choices; AVR and MSP are going to be very limited by comparison. With ARM, the tools are going to be state of the art; ARM is the most-used processor right now. AVR and MSP are special-project add-ons, less widely supported and more fragile. Although ARM is low power compared to Intel on an SBC or computer platform, it is likely not as low-power as an AVR or MSP. You really need to look at your project and pick the right processor for the job; I wouldn't and don't limit myself to one family. With as cheap as the evaluation boards are, and since almost all can use free tools, it is just a matter of putting in a few nights or weekends to learn each. I suggest learning more than one AVR, and learning more than one microprocessor.
At this end of the spectrum, there are really only two factors that make much difference. First, in smaller quantities, the only thing that matters at all is which architecture suits your development needs best. If you are already familiar with PIC, there's not much point in learning AVR, or vice versa. Pick an architecture that you like, then sort through the options on that architecture to see which model is up to your particular needs.
In quantity (say, 20 or more units), you might benefit by choosing just the right platform that precisely matches your devices' needs, to keep costs as low as possible.
In general, PIC and AVR platforms are good for simple, single-function devices, whereas ARM is used in cases where you need a full OS stack like QNX or Linux for things like TCP or real-time work with OS services.
If you want the widest choice of peripherals, performance, price-point, software and tool support, and suppliers it would be hard to beat an ARM Cortex-M3 based part.
But addressing your question directly: the whole AVR range has a consistent architecture and common peripheral set from the Tiny to the Mega (not the AVR32, however, which is entirely different). This is the significant difference from PIC, where, when moving up the range (PIC10, 12, 16, 18, 24, 32), you get different peripheral designs and different instruction sets, and need to invest in different compilers and debug hardware.
The instruction set for AVR was designed for efficient C code compilation (again unlike PIC).
8051 is an architecture originally introduced by Intel decades ago, but now used as the core for 8-bit devices from a number of vendors. It has some clever tricks, such as efficient multitasking context switches via its four duplicated register banks and a block of bit-addressable memory, but it has a quirky memory architecture and a limited address range (like most 8-bit devices). Great for small, well-targeted devices, but not truly general purpose.
ARM Cortex-M3 essentially replaces the ARM7TDMI, and is a cleaner design with a well-thought-out architecture. It requires minimal assembler start-up code, and even ISRs and vector tables can be coded in C directly without any weird compiler extensions or assembler entry/exit code. Its bit-banding technique allows regions of SRAM and the peripheral space to be atomically bit-addressable, which is useful for fast I/O and safe multithreading. Basically it is designed to allow C or C++ code at the system level without non-standard compiler extensions. It is of course a 32-bit architecture, so it does not have the resource or arithmetic limitations of 8-bit devices. Prices for low-end parts compete with higher-performance 8-bit devices, and blow most 16-bit devices out of the water (making 16-bit almost obsolete).
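As a rough illustration of how bit-banding works: each bit in the bit-band region has its own 32-bit alias word, so writing the alias sets or clears that single bit atomically. The region and alias base addresses below are the standard ARM-defined ones; the example register address and bit number are hypothetical.

// Sketch of Cortex-M3 bit-band address arithmetic.
// alias_address = alias_base + (byte_offset * 32) + (bit_number * 4)
#include <stdint.h>

static inline volatile uint32_t *bit_alias(uint32_t byte_addr, uint32_t bit)
{
    uint32_t region_base = byte_addr & 0xF0000000u;   // 0x20000000 (SRAM) or 0x40000000 (peripheral)
    uint32_t alias_base  = region_base + 0x02000000u; // 0x22000000 or 0x42000000
    uint32_t offset      = byte_addr - region_base;
    return reinterpret_cast<volatile uint32_t *>(alias_base + offset * 32u + bit * 4u);
}

// Example (hypothetical peripheral register): atomically set bit 5.
//   *bit_alias(0x40010C0Cu, 5) = 1;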
One other key thing to remember is that PIC and AVR are from single vendors, while 8051 and ARM are licensed cores. Each licensee adds their own peripheral set, so there is no commonality between vendors on peripherals, so device driver code needs porting when switching vendors, and you need to ensure the part has the peripherals you need. If you design your device layer well, this is seldom much of a problem.
Well, it isn't easy to answer; it mostly depends on what you used before. If you are already an AVR user, then it's good to use. On the other hand, you can find PICs with similar capabilities, so I'd say it's mostly personal preference. I think that most ARMs are more capable than the ATmega32 series. If you want good advice, tell us what you plan to use it for.
AVRs have a flat memory model, and free development tools and cheap development hardware are available for them.
I don't know enough about 8051 to comment.
Oh, and if you're thinking about the original ATmega32, I'd say it's a bad idea. It's going to be deprecated soon, so you may want to consider newer models from the ATmega series.