I'm working on an object-oriented C++ library and I want to take advantage of my Intel graphics GPU. I tried to learn some OpenCL, only to find out it doesn't support the C++ OO programming style and classes. Any suggestions for libraries or frameworks that could make it easier to tap into the GPU would be really helpful.
Many thanks,
Amine
Yes, OpenCL (and CUDA) are specialized programming languages, because GPGPUs are very specialized heterogeneous computing devices.
You might perhaps be interested in semi-automatic translation of some subset of C (or Fortran) code to these, but there is No Silver Bullet. Either use a library (e.g. TensorFlow) already ported to GPGPUs, or accept the difficult development effort specific to your GPGPU hardware.
Look into OpenACC, transpyle, Numba, VexCL, etc.
Read also about SYCL, which lets you write single-source heterogeneous code in standard C++.
I come from a programming background and haven't messed around much with hardware or firmware (at most a bit of electronics and Arduino).
What is the motivation in using hardware description languages (HDL) such as Verilog and VHDL over programming languages like C or some Assembly?
Is this issue at all a matter of choice?
I read that hardware whose firmware is written in an HDL has a clear advantage in running instructions in parallel. However, I was surprised to see discussions debating whether to write firmware in C or assembly (how is assembly appropriate if you don't necessarily have a CPU?), but I concluded it's also an option.
Therefore, I have a few questions (don't hesitate to explain anything):
Can firmware really be written either in an HDL or in a software programming language, or are these just different ways to accomplish the same mission? I'd love real-world examples. What constraints result from each option?
I know that a common use of firmware over software is in hardware accelerators (such as GPUs, network adapters, SSL accelerators, etc.). As I understand it, this acceleration is not always necessary, but only recommended (for example, in the case of SSL and the acceleration of complex algorithms). Can one choose between firmware and software in all cases? If not, I'd be happy to see cases in which firmware is clearly and unequivocally appropriate.
I've read that firmware is mostly burned onto ROM or flash. How is it represented there? In bits, like software? If so, what's the profound difference? Is it the availability of adapted circuits in the case of firmware?
I guess I made a mistake here and there in some assumptions, please forgive me. Thank you!
The term "firmware" is at best ill-defined, and I believe that is probably the cause of your confusion.
Historically - before the availability of programmable logic devices - the term "firmware" referred to code stored in and executed from read-only memory (ROM). At a time when the only available ROM technology was mask ROM - where the code was burned into the device when the silicon was manufactured, and was therefore unchangeable without replacing the chip - that was pretty "firm". Even with later programmable read-only memory (PROM), which could be programmed post-manufacture, the term still applied, because the device was one-time programmable (OTP).
With the introduction of UV-erasable EPROM, firmware became perhaps less "firm", but the lack of in-circuit programmability and the need to expose the device to UV light to erase it still made replacing the embedded software a chore - normally requiring removal of the chip, placing it in an eraser for an hour or so, then programming it in a dedicated programmer.
With the advent of NOR flash memory, where code can be stored and executed directly from the device but also readily changed in-circuit, the term firmware in this context has become less common. However, it is still used (perhaps mainly by older practitioners) to refer to embedded software stored in and executed from a random-access, read-only memory device, as opposed to loaded into RAM from a file system.
The use of the term firmware to refer to programmable logic configuration is newer, and has probably come about simply because it is hardware, but the configuration is written much like software, using a high-level language.
The upshot of this is that you do not choose
"Verilog and VHDL over programming languages like C or some Assembly"
because in each context the term firmware simply refers to a different concept.
It would be best to avoid the term firmware altogether as it means different things to different people or in different contexts.
There is perhaps some further confusion from the fact that some hardware description languages are based on software development languages - such as Handel-C, a C-like hardware description language.
This question would not have arisen some years ago, but with current platforms you can now translate between C and HDLs (instead of using Handel-C, a C extension from the '90s), mainly between C and behavioral VHDL. There are also many newer tools provided by vendors, like the Xilinx Electronic System Level design ecosystem, or Impulse-C (http://www.impulseaccelerated.com/products_universal.htm).
It is important to know, though, that C is a mid-level language, while VHDL is, as stated, a hardware description language. C can only express sequential instructions, while VHDL allows both sequential and concurrent execution.
And even though a C program can be written successfully with purely logical or algorithmic thinking, a successful VHDL programmer needs a thorough working knowledge of the hardware circuits, and must be able to predict how a given piece of code will be implemented in hardware.
In both languages you care about resource usage, but in different ways (unless you are programming for devices without tight resource constraints). With VHDL, apart from memory, the other logic elements are also limited in an FPGA (where you normally put the VHDL code).
Any suggestions on what kind of Java processor to choose for studying and learning purposes?
I have read something about picoJava and its speed. Do you think it can be used for learning and practicing Java on processors?
Thank you
It's not clear to me if you are asking about learning the Java programming language, i.e., software, or learning about the processor, i.e., hardware, which executes instructions built in a programming language.
The Java programming language is available on many different platforms (including native processors like picoJava) via the Java virtual machine (JVM). If you're interested in learning the Java programming language, choose a JVM for your favorite OS; most are free, they perform well, and have lots of good documentation and samples. For example, try downloading the Java 7 SDK from Oracle - you'll be able to write simple programs, compile them, and run them on a JVM, on your existing platform.
If you are interested in learning about processors, there are a number of simple processors available, including some specifically tailored for students at a modest cost. You can also study the processor on your own platform - some make documentation generally available, though it's generally not as accessible as programming language documentation, unless it's tailored for students, e.g., A Simple and Affordable TTL Processor for the Classroom, or a programmable interrupt controller.
You can also study the Java byte-code instruction set, i.e., the instruction set of the Java virtual machine.
You can, of course, learn the Java programming language, Java byte-code, and general processor principles at the same time, but you may find it easier to tackle these topics separately; each is vast.
I came across an interesting blog post talking about some kind of superb technique to speed up processing by "vectorizing code". It's very scientific.
He's using something called SSE2 and also talks about SPU, and now I'm curious how this can be brought to digital signal processing on the iPhone.
Although this seems to be something I must deal with in the future, I wonder what the alternatives are. Some people told me it is possible to perform massive-parallel calculations on the GPU.
What options do we have to speed things up like this or even better? What frameworks and technologies are available?
The ARM CPUs on newer iOS devices have Neon SIMD, which is somewhat similar to SSE on x86 or AltiVec on PowerPC.
You might want to look at Apple's Accelerate framework, which started out on Mac OS X but is now also available on iOS 4.0 and later - it contains a lot of useful routines which have been vectorized.
Alternatively you can try writing your own Neon SIMD routines, although this is not for the faint-hearted.
I work for a company that creates firmware for several devices using FreeRTOS. Lately, requests for new features have outpaced what our firmware engineers can deliver, but we can't afford to hire anyone new right now either. Making even tiny changes requires firmware people to go in and modify things at a very low level.
I've been looking for some sort of interpreted-language project for FreeRTOS that would let us implement new features at a higher level. Ideally, I'd like to get to the point where the devices become closer to generic computers, with us writing drivers rather than having to implement every feature ourselves.
Are there any FreeRTOS projects that interpret Java, Python, or similar bytecode?
I've looked on Google, but since I'm not a firmware engineer myself, I'm not sure if I'm looking for the right keywords.
Thanks everyone
I don't think the RTOS, or even the OS, matters too much here if the code is portable. Depending on your input & output scheme, you'll probably need to do a little porting.
Regarding embeddable scripting languages, the two I'm familiar with are Lua and Pawn.
I think there are versions of Python & other such languages ported to embedded systems, but they tend to be the embedded Linux variety. Depending on your platform (no idea if it's a little MCU with 8K ROM or an embedded PC) that might be an option.
There are no interpreted languages out there that are "made" to use FreeRTOS, or any other microcontroller threading library (loosely called an 'RTOS' within the e2e community).
However, languages which I have first-hand experience using in embedded systems that are (a) written in C, and (b) small enough to embed in a microcontroller include:
Lua (suitable for almost anything, even some PICs)
Python (suitable for most ARM architectures with more than 1 MB of RAM, anyway)
I do not have first-hand experience with it, but Ruby may be as easy to embed as Python.
Instead of looking for FreeRTOS-specific interpreters, you might try looking for any interpreters for your particular microcontroller, or microcontroller in general. It might be possible to interface them with FreeRTOS or turn the interpreter into a task.
There seems to be someone trying to get Lua going on FreeRTOS (PIC32).
I guess your question ultimately boils down to finding ways of increasing the level of abstraction above the low-level RTOS mechanisms. While it is perhaps true that interpreted languages work at a somewhat higher level of abstraction than C, you can do much better than that by applying methods based on event-driven frameworks and state machines. Such event-driven frameworks have been around for decades and have been proven in countless embedded systems in all sorts of domains. Today, virtually every modeling tool for embedded systems capable of code generation (e.g., Rational Rose RT, Rhapsody, etc.) contains a variant of such a state-machine framework.
But event-driven, state-machine frameworks can be used also without big tools. The QP state machine frameworks (state-machine.com), for example, do everything that a conventional RTOS can do, only more efficiently, plus many things that an RTOS can't.
When you start using modern event-driven programming paradigm with state machines, your problems will change. You will no longer struggle with 15 levels of convoluted if-else statements, and you will stop worrying about semaphores or other such low-level RTOS mechanisms. Instead, you'll start thinking at a higher level of abstraction about state machines and events exchanged among them. After you experience this quantum leap, you will never want to go back to the raw RTOS and the spaghetti code.
I am a self-taught embedded developer. I mostly use AVRs programmed in C and ASM, but I have dabbled with other systems. I am looking to move onto more complex devices like CPLDs and FPGAs, but I have no idea where to start. So my one and a half questions are:
Do you prefer VHDL or Verilog and why?
What is a good way for someone with no prior experience in HDLs to get started learning such a beast?
Buy a cheap starter kit from Xilinx or Altera (the two big FPGA players). A Xilinx Spartan3 starter kit is $200.
I personally prefer VHDL. It is strongly typed and has more advanced features than Verilog. VHDL is more popular in Europe and Verilog is dominating in the US.
Buy a book (e.g. Peter Ashenden's The Designer's Guide to VHDL) and start simulating your designs in a free simulator. ModelSim from Mentor Graphics is a good one, and there are free versions available (with crippled simulation speed).
Make up some interesting project (mini cpu, vga graphics, synthesizer) and start designing. Always simulate and make sure your design works before putting your design into the hardware ...
If you have no background in digital electronics buy a book in that subject as well.
Back in the day when I worked on ASIC design, it was in Verilog. In many cases, as a designer you don't get to choose: the ASIC synthesis tools for an HDL cost a substantial amount of money, and companies only purchase the full toolchain for one "blessed" language. My employer had standardized on Verilog, so that is what we used.
FPGA synthesis tools are substantially cheaper, so you have more freedom as an FPGA designer to pick your favored language and tools.
There are also free Verilog simulators available at verilog.net.
As #kris mentioned, an FPGA starter board is also a good way to go. Having your Verilog code light up an LED on a board is infinitely more satisfying than a simulator waveform on the screen.
Also check out opencores.org - There are some articles and a lot of open source code in both Verilog and VHDL you can learn from.
As far as I can tell, VHDL vs Verilog gets just as religious as Ruby vs Python or Java vs C#. Different people have their own favourites.
Check out this site:
http://www.fpga4fun.com/
Nice simple projects using simple tools. I used one of these boards a few years ago to build a small VGA display system for use as a notice board.
Looking at the site again I'm thinking of getting a Xylo-LM board as it has an ARM processor as well as SDRAM and a Xilinx Spartan 3e.
Another board I used before was the XPort 2 from Charmed Labs. This plugs into a Gameboy Advance which is well supported with open source development tools.
Check out:
http://www.charmedlabs.com/index.php?option=com_virtuemart&page=shop.browse&category_id=6&Itemid=43
One additional thing to think about is whether you should start by learning an HDL, or by learning boolean logic, Karnaugh maps, DeMorgan's theorem, gates, implementing arithmetic in gates, etc. It's easy to write non-synthesizable HDL if you don't have an accurate mental model of what the underlying hardware will look like.
This book is the Verilog version of the one I used in undergrad, and it did a pretty good job in my opinion. It starts you out with the material mentioned above, as well as some basic, basic info on the transistor-level implementation of gates, then introduces you to an HDL, and has you build progressively more complex structural and behavioral hardware blocks. Yes, I know it's ungodly expensive, as are most college textbooks, but this is one of those things for which the information I've been able to find online, at least, has been woefully inadequate.
Once you're ready to choose an HDL, I heartily recommend Verilog (having learned VHDL first). Yes, VHDL was once much more feature-rich than Verilog, but later revisions of the language (Verilog 2001, Verilog 2005, SystemVerilog, etc.) have cherry-picked most of the interesting features, and there is far more robust toolchain support for Verilog and its variants these days, in addition to it being the dominant language in use in the US (in my experience, VHDL is only used here when dealing with extreme legacy blocks, and in academic contexts, partially due to the tool support mentioned previously). Finally, once you've learned the HDL, you have a hardware verification language (HVL) in SystemVerilog with strict-superset syntax, saving you a good bit of the learning curve. Not so for VHDL, to my knowledge.
Altera and Xilinx have simulators built into their free tool sets. They are limited versions of the very popular Mentor ModelSim tools. They will handle the size of designs you are likely to fit into a < $500 (US) board.
For HDL choice Verilog is to C as VHDL is to ADA. So Verilog is easier to get started with, but you can make mistakes more easily. Check your simulation and compilation warnings to avoid those problems.
For Verilog, also check out http://www.opensparc.net/
HTH
Verilog is much easier to learn and has simpler syntax; it's also a newer language. Secondly, most people use Verilog. VHDL has many datatypes, which give it a steeper learning curve. Once you know Verilog, it will be easier to bridge the gap to VHDL. Oh, and there are also macros in Verilog, which are very neat; I invented a language with them. Finally, you will eventually be able to do mixed-language HW design. I started out with VHDL, then learned Verilog, and am now pro-Verilog.
I was in the same boat as you are now a semester ago. My preferred book was this one, since it talked about FPGAs by reviewing digital logic. It also shows side-by-side comparisons of VHDL and Verilog code so that, instead of choosing one that people may push you to, you can learn the one that you like stylistically.
As for the FPGA itself, use Xilinx's ISE WebPack to do your programming (it's free), and start off with a board like the Basys2. It's a very small FPGA that should get you started for a small price, with the added advantage that you learn resource and memory management very early. You can use Digilent's Adept (also free) to make life easy when uploading your "compiled" code to the board.
Good luck!
Before plunging into Verilog/VHDL or buying an FPGA dev kit, I'd recommend taking an introductory class on digital design. MIT has good online OpenCourseWare classes on the subject.
Good luck.