I am writing a project in C++ for an embedded system with no OS support and almost no library support; very bare-metal. Hence, a fair amount of my code is tightly coupled (e.g., software-triggered interrupts and the layer directly above them).
Part of what I am doing involves changing the serial port configuration, thus driving concurrent change on the PC end (the UI end) and the microprocessor (the activity end).
I'm doing okay so far with a super-careful, incremental style of development (fitting it in piece by piece). However, I'd like to be more confident that my code works in an engineering sense.
What kind of methodologies/frameworks would you recommend for this kind of situation?
Edit:
I use the AMD186 ES on an ACore86 board made by Tern, Inc. Compiler: Paradigm, free edition (ships with the board). I don't have an option to change what I'm working on, unfortunately.
The lack of infrastructure in a bare metal environment is pretty challenging. I'd recommend you focus on debugging tools. Even with great care and excellent methodology, you'll need the ability to debug things.
It would behoove you to get gdbagent working. You'll need to implement this yourself, but it is a simple text-based protocol. You run gdb on an external machine and communicate with the gdbagent on your target.
You certainly could run the gdbagent protocol over the serial port, but this rapidly becomes tedious when large amounts of data need to be examined. If you have a faster interface available, take advantage of it.
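For what it's worth, the packet framing is simple enough to hand-roll. Here is a minimal C++ sketch of the target side, assuming hypothetical board-specific serial_read_byte()/serial_write_byte() routines; a real stub also has to implement the register-dump, memory-read/write, and breakpoint packets, which this skeleton just answers with "not supported".

#include <cstdint>
#include <cstddef>

extern int  serial_read_byte();            // hypothetical: blocking read, returns 0..255
extern void serial_write_byte(uint8_t b);  // hypothetical: blocking write

static char hex_digit(uint8_t v) { return "0123456789abcdef"[v & 0x0f]; }

// Packets look like $<payload>#<two hex checksum digits>; the checksum is the
// modulo-256 sum of the payload bytes. Each side acknowledges a packet with '+'.
static void send_packet(const char* payload)
{
    uint8_t checksum = 0;
    serial_write_byte('$');
    for (const char* p = payload; *p; ++p) {
        checksum += static_cast<uint8_t>(*p);
        serial_write_byte(static_cast<uint8_t>(*p));
    }
    serial_write_byte('#');
    serial_write_byte(hex_digit(checksum >> 4));
    serial_write_byte(hex_digit(checksum));
}

void gdb_stub_poll()
{
    while (serial_read_byte() != '$') { /* skip noise until a packet starts */ }

    char payload[128];
    std::size_t n = 0;
    int c;
    while ((c = serial_read_byte()) != '#' && n < sizeof payload - 1)
        payload[n++] = static_cast<char>(c);
    payload[n] = '\0';
    serial_read_byte();          // discard checksum, high nibble
    serial_read_byte();          // discard checksum, low nibble
    serial_write_byte('+');      // acknowledge receipt

    if (payload[0] == '?')       // host asks: why did the target stop?
        send_packet("S05");      // reply: SIGTRAP
    else
        send_packet("");         // empty reply means "command not supported"
}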
I don't know what your budget is, but you should also plan for a JTAG debugger. gdbagent is great as long as the agent on the target is still able to run; if everything crashes hard, you're toast. JTAG debuggers are enormously expensive, but can be rented. I've used Corelis products in the past, and I've heard good things about Abatron.
I think your best bet is to work with your compiler vendor to get a device simulator.
Tessy supposedly works with that chip. Check out: http://www.hitex.us/products.html?con_186.html~content
When timing is important, I like to use a free I/O pin or two together with a scope to instrument the code. I'm also a fan of the JTAG port for source-level debugging. You can also have the microprocessor store a vector of data and send it back over a second UART (if you have one) to the PC for analysis.
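To make that concrete, the instrumentation can be as small as two writes around the code you want to time; DEBUG_PORT, its address, and the pin bit below are placeholders for whatever your board actually maps.

#include <cstdint>

// Hypothetical memory-mapped output port and spare pin; substitute your board's real register.
#define DEBUG_PORT (*reinterpret_cast<volatile uint8_t*>(0xF000))
#define DEBUG_PIN  0x01

inline void debug_pin_high() { DEBUG_PORT |=  DEBUG_PIN; }
inline void debug_pin_low()  { DEBUG_PORT &= ~DEBUG_PIN; }

void service_serial_interrupt()
{
    debug_pin_high();       // on the scope, the pulse width is the time spent in the handler
    // ... real interrupt handling work goes here ...
    debug_pin_low();
}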
Something that I've seen done in this sort of area is unit testing.
No, I'm not joking.
Unit tests run on the device, under the control of the host PC.
You write a wrapper that lets you load programs into SRAM under unit-test control.
Then your PC can send a program, run it, and check the output.
If you need to exercise your board, get a LabJack or similar USB interface card.
Now that's the hardware in a test jig, all run from your host PC.
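A rough sketch of the PC side of such a harness, using POSIX serial I/O (the device path, the command string, and the expected reply are made up for illustration, and termios baud-rate setup is omitted):

#include <fcntl.h>
#include <unistd.h>
#include <cstddef>
#include <cstdio>
#include <cstring>
#include <iostream>
#include <string>

int main()
{
    int fd = open("/dev/ttyS0", O_RDWR | O_NOCTTY);   // serial link to the target board
    if (fd < 0) { std::perror("open"); return 1; }

    const char* test_cmd = "RUN uart_loopback_test\n"; // hypothetical test command
    write(fd, test_cmd, std::strlen(test_cmd));

    char buf[256];
    ssize_t n = read(fd, buf, sizeof buf - 1);         // naive single read; a real harness would loop with a timeout
    std::string reply(buf, n > 0 ? static_cast<std::size_t>(n) : 0);

    const std::string expected = "PASS\n";             // hypothetical expected output
    std::cout << (reply == expected ? "test passed" : "test FAILED: " + reply) << std::endl;
    close(fd);
    return reply == expected ? 0 : 1;
}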
One thing I've done with some success has been to design a PC environment where code can be compiled as C++ for the PC and tested, and then later compiled as "straight" C to run on the embedded system. I/O port references are #defined to be property accesses on an I/O object, which are then sent via socket to a "hardware emulation" program. Parts of the system ended up being clunkier than I would have liked, but I expect succeeding versions will be less clunky.
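A stripped-down sketch of that idea (the class, the socket helpers, the wire format, and the register address are all invented for illustration; on the embedded build the same name expands to the real register instead):

#include <cstdint>
#include <cstdio>

extern void    emu_socket_send(const char* msg);     // hypothetical: write a line to the emulator socket
extern uint8_t emu_socket_request(const char* msg);  // hypothetical: round-trip request/reply

class EmulatedPort {
public:
    explicit EmulatedPort(const char* name) : name_(name) {}

    // Writing to the "port" forwards the value to the emulator process.
    EmulatedPort& operator=(uint8_t value) {
        char msg[64];
        std::snprintf(msg, sizeof msg, "WRITE %s %u\n", name_, value);
        emu_socket_send(msg);
        return *this;
    }

    // Reading the "port" asks the emulator for the current value.
    operator uint8_t() const {
        char msg[64];
        std::snprintf(msg, sizeof msg, "READ %s\n", name_);
        return emu_socket_request(msg);
    }
private:
    const char* name_;
};

#ifdef PC_BUILD
static EmulatedPort porta_proxy("PORTA");
#define PORTA porta_proxy
#else
#define PORTA (*reinterpret_cast<volatile uint8_t*>(0x0100)) // hypothetical register address
#endif

// Application code reads the same in both builds:
void set_status_led() { PORTA = PORTA | 0x01; }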
I'm using the same code, which is still working, for a new version of equipment that the company bought.
I can't communicate with the equipment in VB in Visual Studio (the language of the last code I wrote). But I can make a simple program in LabVIEW to see if the equipment is communicating, and it is.
So, my question is: what code is LabVIEW sending to the equipment?
The only thing I see from the LabVIEW GUI is *IDN?\n
Is that the same as what I am writing?
mySerialport.WriteLine("*idn?" + Chr(10))
*IDN?\n
is not the same as:
mySerialport.WriteLine("*idn?" + Chr(10))
The former is capitalized while the latter is not, and it may cause an issue depending on the instrument.
You are using the serial port, so the most important thing to consider is the baud rate. It is possible that the later model equipment your company purchased has a different baud rate to the one that was used previously.
If you want to see exactly what data LabVIEW is sending, you can use NI IO Trace or a non-NI serial port monitor such as those listed here
The question isn’t what language LabVIEW uses. LabVIEW is a programming environment with library APIs to speak to lots of hardware and the ability for anyone to write code to speak to even more hardware. The question is “what language does the HARDWARE speak?” To answer that, you’d post what kind of hardware it is and probably go to the manufacturer’s website for a spec sheet.
“*IDN?\n” looks like a GPIB command, which is just a framework serial protocol. You’d need the spec sheet of the hardware to know the particular commands that your hardware understands.
PS: LabVIEW doesn’t have a GUI showing anything about hardware communication. You have an application written in LabVIEW that has a GUI that is displaying information. You can edit the program to print out more info if you want, just like you could in VB. Complaining about the “LabVIEW GUI” in this case is equivalent to complaining about the Visual Studio GUI when the problem is with the program you’ve written in VS!
"The only thing I see from the labview GUI is *IDN?/n"
That sounds more like you are using the VISA Test Panel in MAX (Measurement and Automation eXplorer). It is related to LabVIEW in that it is also a tool developed by National Instruments (NI). But it does not sound like you have actually touched LabVIEW itself.
As was already stated, *IDN?\n is a typical command for an instrument that follows the SCPI messaging standard. On a Windows system, that is usually the same as "*idn?" + Chr(13) + Chr(10).
As SeanJ pointed out, *IDN?\n is not the same as "*idn?". Further, make sure the whitespace character in your calling method is visible. Sometimes machines require you to manually append "\r\n" for a complete carriage return and line feed.
I am a self-taught programmer and have only delved into new areas of programming as the need arises. I have never done any network programming; everything I have written has been for a single computer. I have written a program for an old board game and it runs great, but now I want to write it to run for multiple players across a local network. I have an idea of what has to happen in terms of constantly checking a specified folder/file for changes.
But how do you test this without building/compiling the program and installing it on another computer every time you make a change? I have tried searching various forms of what I have as the title here, but all that comes up is about testing network connections, socket programming (would this be easier or needed?), or FileSystemWatcher (which may be an option too if it will run on Windows 7 and 10). I find nothing about testing programs that actually access the network, or about simulating two copies of the program running. Any suggestions, links, etc. would be greatly appreciated.
I think you will be disappointed in the performance of a file-based network game unless reaction or refresh time is of little consequence for your "board" game. You may also need to work out potential concurrency issues (i.e., someone updating a file you've just read). If you have any desire to do other games in the future, you should be using sockets (most likely UDP unless you have a good reason not to) to create a client-server system.
As to your question: yes, you should be able to test it. You just need to run both a compiled exe and the source in VS debug mode, accessing the same folder on your drive. If you go with the socket-based option, you would use your PC's loopback address, 127.0.0.1 (sometimes known as localhost), but the two instances will need to communicate on different ports.
I develop a library that is used by other software. Typically this library ends up packaged in Debian, Fedora, etc., and its "reverse-dependencies" also end up packaged and using it.
So, I guess this makes me an "upstream maintainer." I simply use autotools to produce a tarball, and packagers then use that to produce .deb files, etc. Now, something that has bothered me for quite some time is the disconnect between maintainers and packagers. I feel like every time I do a release, even if it is simply a bugfix release, I am potentially causing headaches for everyone down the chain.
Possible problems:
I introduced a bug that wasn't caught in testing, even though I tried extensively to test various configurations. I don't have unlimited testing resources, and it is a small library, so I am mostly on my own; there are one or two other interested people who help out, but they generally only test on one platform.
I forgot to bump the version number, causing confusion
I did bump the version number but forgot to bump the SO version (you know, the thing that specifies API/ABI compatibility, and is independent from the software release version)
I made a small change but accidentally caused an API incompatibility without thinking (e.g. made something "const" that should have been all along, didn't realize it would break people's code)
I made a small change but accidentally caused an ABI incompatibility -- e.g., changed a constant in a header file, wasn't thinking and forgot that this would be "baked in" in software compiled against a previous version
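To make the last item concrete, here is a hypothetical illustration (all names invented) of how a header constant gets frozen into a client binary:

// ---- mylib.h, version 1.0 ----
#define MYLIB_MAX_CHANNELS 8            // silently bumped to 16 in version 1.1

// ---- client code, compiled against version 1.0 ----
#include <cstddef>

struct Sample { int value; };

extern void mylib_fill_all_channels(Sample* out, std::size_t count); // defined in the shared library

void read_samples()
{
    // The value 8 is frozen into this binary at *its* compile time. If the 1.1
    // library is dropped in underneath and mylib_fill_all_channels() now assumes
    // 16 channels, it writes past the end of this buffer -- an ABI break with no
    // recompilation and no compiler warning, because replacing the .so does not
    // update the constant that was copied into the application.
    Sample buffer[MYLIB_MAX_CHANNELS];
    mylib_fill_all_channels(buffer, MYLIB_MAX_CHANNELS);
}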
I have done pretty much all of these things at some time or other in the past. Due to these previous mistakes, these days I probably spend more time testing than actually developing, and still end up making mistakes. The mistakes are often not that bad; after all, people understand that mistakes happen. But they sometimes cause people to stop using the library without even talking to me or communicating on the mailing list, which sucks -- if those people were so interested, it would be cool if they had helped test before I published a release -- but anyway, you get the idea.
So, rather than just compiling and running the unit tests, my testing process now involves some pretty extensive steps. In particular, I am now using "apt-cache rdepends" to find software that uses my library; I install it and switch the binary out to test for ABI compatibility. Then I uninstall it, "apt-get source" it, and compile it against the new version to test for API compatibility.
This kind of testing involves:
understanding other people's software and figuring out where and how it exercises my code
compiling other people's software, including figuring out their other dependencies and how to get everything working -- for large projects this can be a nightmare.
some projects using my software are actually plugins for other projects, meaning I have to additionally get the host program working
many projects using my library are GUI-oriented, so I have to navigate and learn some software I don't even know or use, and then guess when I have got it to a place where it is actually calling out to my library
my library works on Linux, Windows, and OS X, and often I don't have enough machines and operating systems around to test on. For example, a huge problem with my last release was a bug that only showed up in Linux on x86_64. I had tested on Linux i386 and OS X 64-bit, but these platforms didn't show the bug; it was particular to the Linux 64-bit combination, which I had neglected to test because I didn't have the right hardware and assumed I'd covered enough ground.
As you can imagine this is not a light task, and makes for huge delays before publishing a given release, delaying the dissemination of bugfixes, etc. The worst thing is that my project is not even a large library, and is a hobby project of mine, so all of this feels like huge overhead just for something I do in my spare time. I'd rather be developing features than just defending against my own potential mistakes for every little change I make. But, it currently has 42 rdepends listed in Ubuntu, to give you an idea, and I'm proud that it is useful to other people so I want to be able to develop and improve it without worrying so much about breaking things for everyone.
My question is, how can I improve the efficiency of this testing process? Are there for example any tools that will automatically compile "rdepends" packages against a new version of my library and give me a report? Or somehow download compiled binaries of rdepends and test loading them against my ABI without actually necessarily requiring me to navigate the GUI of some unknown software?
how can I improve the efficiency of this testing process?
The main problem is communication, apart from the fact that you lack scripts that automate the process. You can do pre-releases of your packages, mail the distributions that your library supports, etc. Or, instead of maintaining the packages yourself, get them into some major distro and let an experienced maintainer handle that part.
You will always break people's stuff occasionally; just don't do it too frequently. Remember that people need stability in a certain sense, so document each change very well so that people using your library can't say you didn't tell them.
About tools... you should find your own pace. Maybe some buildbots (AFAIK some projects lend build bots), maybe scripts automating your build process, etc., etc., etc. -- did I say etc.? The problem is too broad, and there are far too many possible solutions for any single suggestion to be viable. You may want to check https://softwareengineering.stackexchange.com/q/150466/104338 for "some" methods but, again, you should find your own pace.
I am an adept Visual Basic programmer. I wish to learn how people program hardware. For example, I have seen people create LED watches, boxes, etc. How do you achieve this? Can it be done using VB or Java? I have some experience reading C and C++ code. I am only aware of I/O in the C and C++ languages.
Probably you are looking for a programmable microcontroller. If you have experience in C/C++/Java, check out Arduino. Its chip is programmed using a C-like language. This "How tos" page might help you get started. There are also some good books that will help you move forward:
Programming Interactivity.
Making Things Move: DIY Mechanisms for Inventors, Hobbyists, and Artists
Wiring is a platform similar to Arduino.
Also have a look at the Forth programming language. There are a lot of interesting "tiny computers" that you can program with this rather unusual language. Here is a partial list:
Forth Inc
Greenarrays
Zilog Z8
PIC18Fxx2
Two famous Forth books:
Starting Forth
Thinking Forth (A classic in Software Engineering literature.)
how people program hardware
If by 'hardware' you mean a standalone device (an embedded system), then the process involves cross-compilation. Code for the device is written in some (high-level) language on a host PC, compiled, and converted to a form suitable for downloading onto the target device.
A cross-compiler generates executable code for a platform other than the one it runs on -- for example, an AVR cross-compiler will generate code for the AVR microcontroller, but the compiler itself runs on a PC. Assembly and C are used universally, and C++, Java, and Ada to some extent.
If by 'hardware' you mean some device connected to the PC via some port (serial, parallel, USB), then the programming involves interaction through that port, possibly needing a device driver as well.
Can it be done using VB or Java?
I'm not sure about VB (perhaps there are VB compilers for WinCE and its ilk). Java is used on more complex/larger embedded systems (e.g., mobile phones), mainly to develop user applications for the device.
create an LED watches, boxes etc. How do you achieve this?
If you're interested in developing something like an LED watch, you need to learn how to program a microcontroller. At the least, you need two components: the microcontroller and some hardware that loads programs onto it (a programmer). You may invest in a development board, or build one yourself. Naturally, you will also need the cross-compiler and the software that interacts with the programmer so that it can load code.
I'm partial to AVR, so I'd suggest that. Other options include PIC, some variant of 8051, PSoC1 and TI's MSP devices.
The AVR tool-chain is bundled in WinAVR, and it includes avr-gcc (cross-compiler frontend), avrdude (software that interacts with the programmer hardware), and a C library (avr-libc), plus a bunch of useful tools. Programmer hardware can range from something as simple as a DAPA/bsd cable to USB-based ones (AVRISP, USBasp, Usbprog), etc.
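A minimal "blink an LED" program for that tool-chain looks roughly like this (the clock speed, port, and pin are assumptions; adjust them for your part):

#define F_CPU 1000000UL        // assumed 1 MHz internal oscillator

#include <avr/io.h>
#include <util/delay.h>

int main()
{
    DDRB |= _BV(DDB0);         // make PB0 an output

    for (;;)
    {
        PORTB ^= _BV(PB0);     // toggle the LED
        _delay_ms(500);
    }
}

A typical cycle is then: compile with avr-gcc (or avr-g++) using the -mmcu flag for your part, convert the ELF to a hex file with avr-objcopy, and push it through your programmer with avrdude.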
Or, if your PC still has a parallel port, you can try to control say a set of LEDs using an application written in VB. Check http://www.lvr.com/parport.htm for details.
I would recommend starting out with something like an Arduino, which is a good place to get started with programming close to the hardware. It's a prototyping board with some built-in LEDs and other things, depending on which model you get. You can use C/C++, or any other language that can be cross-compiled to a format compatible with the target hardware (an ATmega microcontroller, I believe, on the Arduino).
Check out: http://www.google.com/search?client=ubuntu&channel=fs&q=arduino&ie=utf-8&oe=utf-8
If you just want to connect something to the computer's serial/parallel port and 'talk' to it, you can use most languages. In Visual Basic you'll need a DLL to achieve this. Java may be able to do it too.
If you want to program a microcontroller chip, you'd best learn C, because this is the language used to program most of them, although some of them accept BASIC, Java, Processing, and C++.
There are a number of BASIC Stamps you can use to get your feet wet; Parallax, for example, built their business on BASIC-based embedded systems. If you want to move forward at that programming level you really need to learn C for the most coverage, and I highly recommend assembler as well, at least a few different instruction sets.
You might be interested in Gadgeteer. I got to play with a kit a few weeks ago, and it's amazing fun. You can't currently do VB, but you can do C# and VB is coming very soon.
I'm working on a software project intended to bring specific old hardware back into use, mostly for not-for-profit organizations and poor schools.
I need a way to simulate old hardware so I can test the application before shipping it out.
How can I do this?
I'm not sure exactly what the question is asking for. I think you are asking for a way to emulate certain HW?
If that is the case, I've used QEMU in the past, and it has worked great. QEMU is an open source machine emulator and virtualizer.
Use virtual machines? Prepare images reflecting (more or less) the state of the target machines (speed, hardware, etc.) and use them for testing the deployment.
You might want to check out Emulator Zone; "emulator" is a good Google search term you might not have tried yet.
Well, if the old hardware communicates via RS232, write a class that wraps the RS232 commands and have the class inject the messages the old hardware would respond with. In your program, work against that class instead of the real RS232 port, and just swap in the real RS232 instance before shipping.
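A bare-bones sketch of what that wrapper might look like (the class names, the command, and the canned replies are invented; your fake would replay whatever your particular device actually sends):

#include <queue>
#include <string>

class ISerialLink {
public:
    virtual ~ISerialLink() = default;
    virtual void send(const std::string& command) = 0;
    virtual std::string receive() = 0;
};

// Used during development and testing: no hardware required.
class FakeHardwareLink : public ISerialLink {
public:
    void send(const std::string& command) override {
        // Inject whatever the old device would answer to this command.
        if (command == "STATUS?\r\n")
            replies_.push("READY\r\n");      // made-up canned reply
        else
            replies_.push("OK\r\n");
    }
    std::string receive() override {
        if (replies_.empty()) return "";
        std::string r = replies_.front();
        replies_.pop();
        return r;
    }
private:
    std::queue<std::string> replies_;
};

// class RealRs232Link : public ISerialLink { /* opens and drives the actual port */ };

// The application only sees ISerialLink, so swapping in RealRs232Link before
// shipping means changing a single construction site.
void query_device(ISerialLink& link)
{
    link.send("STATUS?\r\n");
    std::string status = link.receive();
    // ... act on status ...
}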
I have done this successfully in a project; it worked out really well, and it's not as complicated as you might think.