Tips on Using Bison --graph=[file] on Linux

Recently (about a month ago) I was trying to introduce new constructs to my company's in-house extension language, and struggling with a couple of reduce-reduce errors. While I eventually solved this problem, digging into the y.output file was no picnic.
As an experiment, I tried using Bison's --graph=<file> option to output a DOT file (note that our standard build uses Byacc, not Bison). As I'm on a 'turnkey' Linux box, I didn't have a Graphviz installation and could not easily install from RPMs (working on Red Hat Enterprise Linux 4). Instead, I built it from source.
As an initial experiment, I tried running dotty with PostScript output. Now, our internal language is your average home-grown, Turing-complete, dynamically typed scripting language, but I was unprepared for what followed. The dotty run took over four hours (on a 2GHz dual-core AMD64 box)! And when it was done, the graph that was rendered was not what I would call readable.
So, quite simply, I'm looking for advice. Is there a set of switches that would improve the outcome over the 'default' approach I took? I'm looking for experience in:
optimizing 'render' time
improving readability of the graph
possible advice on better graphical viewers

I imagine you've already seen this link, but just for completeness, there is a list of viewers etc. at: http://graphviz.org/resources/ or see https://web.archive.org/web/20131005020548/http://graphviz.org/Resources.php for an archived copy.
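A couple of things that might help, though I haven't tried them on a grammar as large as yours. First, skip dotty and PostScript entirely and have dot render straight to a file. Second, dot's nslimit and mclimit graph attributes cap the layout iterations, trading quality for speed, and rankdir=LR sometimes reads better for wide automata. For example (grammar.dot standing in for whatever you passed to --graph=):

    dot -Tsvg grammar.dot -o grammar.svg
    dot -Tsvg -Gnslimit=4 -Gmclimit=2 -Grankdir=LR grammar.dot -o grammar.svg

For browsing the result, a zoomable viewer such as ZGRViewer copes with a huge graph far better than a static PostScript page.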

Related

Portable Executable DLL file and binary date format

I have a PE executable file *.exe (32-bit), a small application (2.6 MB) that updates the firmware of a TV device. However, the update mechanism was only available up to 2013-03-12. I want to hack this executable just for pleasure. I'm trying to find this expiration date in the file's hexdump using PE Explorer, and replace it with some future date to make the program work.
I found this article about binary date format:
binary date format
I am trying to find something like this value:
2013-03-xx: 0x713xxxxx
Is this a good approach to my task? Any suggestions? Do you know of any other hexdump tools that might be useful?
Best regards,
WP
There are likely a lot of values of the form 0x713xxxxx -- 2.6 MB is larger than you might think once you start looking through it more or less at random (and you don't actually know that the application uses this date format internally).
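If you want to try the pattern search anyway, one cheap trick is to generate the byte sequences for each date encoding you can think of and grep the hexdump for them. Here is a sketch in plain C, assuming (and it is only an assumption) that the program stores the expiry as a 32-bit little-endian Unix time_t:

    #include <stdio.h>
    #include <string.h>
    #include <time.h>

    int main(void)
    {
        struct tm tm;
        time_t t;
        unsigned long v;

        memset(&tm, 0, sizeof tm);
        tm.tm_year = 2013 - 1900;   /* tm_year counts from 1900 */
        tm.tm_mon  = 3 - 1;         /* tm_mon is zero-based, so 2 = March */
        tm.tm_mday = 12;

        t = mktime(&tm);            /* local time is close enough for a search */
        v = (unsigned long)t;

        printf("time_t value:   0x%08lx\n", v);
        printf("LE byte string: %02lx %02lx %02lx %02lx\n",
               v & 0xff, (v >> 8) & 0xff, (v >> 16) & 0xff, (v >> 24) & 0xff);
        return 0;
    }

Repeat for the other formats the article mentions (DOS date/time, FILETIME, and so on); if nothing matches, that is a strong hint you need the approach below.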
The conventional approach to this sort of problem is to use a tool to step through the program, examining the code as it executes, until you find the point where the check occurs. Then simply disable the check so that it never triggers -- by altering the date, or simply by altering the code.
A popular tool for stepping through code that you do not control is the Interactive Disassembler, IDA. You can download a freeware version of it here: https://www.hex-rays.com/products/ida/support/download_freeware.shtml
It might be harder than you think to do what you want, but you'll almost certainly learn a lot by trying.
Be aware of the legal issues you may be getting yourself into by making modifications to someone else's binaries, particularly if you distribute them afterwards.
dumpbin is a good PE parser (though if I were you, I wouldn't do this kind of timestamp hack :))

Porting newlib to a custom ARM setup

This is my first post, and it covers something I've been trying to get working, on and off, for about a year now.
Essentially it boils down to this: I have a copy of newlib which I'm trying to get working on an LPC2388 (an ARM7TDMI from NXP), on a Linux box using arm-elf-gcc.
I've been looking at a lot of the tutorials on porting newlib, and they all talk about the stubs (like exit, open, read/write, sbrk); I have a pretty good idea of how to implement all of these functions. But where should I put them?
I have the newlib distribution from sources.redhat.com/pub/newlib/newlib-1.18.0.tar.gz and after poking around I found "syscalls.c" (in newlib-1.18.0/newlib/libc/sys/arm), which contains all of the stubs I have to update, but they're all filled in with rather finished-looking code (which does NOT seem to work without the crt0.S, which itself does not work with my chip).
Should I just wipe out those functions myself and re-write them? Or should I write them somewhere else? Should I make a whole new folder in newlib/libc/sys with the name of my "architecture" and change the target to match?
I'm also curious whether there's proper etiquette for distributing something like this after releasing it as an open-source project. I currently have a script which downloads binutils, arm-elf-gcc, newlib, and gdb, and compiles them. If I am modifying files in the newlib directory, should I ship a patch which my script auto-applies? Or should I add the modified newlib to the repository?
Thanks for bothering to read! Following this is a more detailed breakdown of what I'm doing.
For those who want/need more info about my setup:
I'm building a ARM videogame console based loosely on the Uzebox project ( http://belogic.com/uzebox/ ).
I've been doing all sorts of things, pulling from a lot of different resources as I try to figure it out. You can read about the start of my adventures here (SparkFun forums; no one responded as I figured it out on my own): forum.sparkfun.com/viewtopic.php?f=11&t=22072
I followed all of this by reading through the Stack Overflow questions about porting newlib and saw a few of the different tutorials (like wiki.osdev.org/Porting_Newlib), but they also suffer from telling me to implement stubs without mentioning where, who, what, when, or how!
But where should I put them?
You can put them where you like, so long as they exist in the final link. You might incorporate them in the libc library itself, or you might keep that generic and have the syscalls as a separate target-specific object file or library.
You may need to create your own target-specific crt0.S and assemble and link it for your target.
A good tutorial by Miro Samek of Quantum Leaps on getting GNU/ARM development up and running is available here. The examples are based on an Atmel AT91 part so you will need to know a little about your NXP device to adapt the start-up code.
A ready-made Newlib porting layer for LPC2xxx was available here, but the links to the files appear to be broken. The same porting layer is used in Martin Thomas' WinARM project. This is a Windows port of GNU ARM GCC, but the examples included in it are target-specific, not host-specific.
You should only need to modify the porting layer of Newlib, and since it is target- and application-specific, you need not (in fact probably should not) submit your code to the project.
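On your distribution question: a common pattern is to keep a pristine newlib tree plus a patch that your build script applies, rather than committing a modified copy. For example (the paths and patch name here are illustrative):

    diff -ruN newlib-1.18.0.orig newlib-1.18.0 > lpc2388-newlib.patch
    patch -p1 -d newlib-1.18.0 < lpc2388-newlib.patch

That keeps your changes reviewable and survives a newlib version bump more gracefully than a whole vendored tree.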
When I was using newlib, that is exactly what I did: blew away crt0.S, syscalls.c, and libcfunc.c. My personal preference was to link in the replacements for crt0.S and syscalls.c (rolling the few functions from libcfunc into the syscalls.c replacement) based on the embedded application.
I never had an interest in pushing any of that work back into the distro, so cannot help you there.
You are on the right path though: crt0.S and syscalls.c are where you want to work to customize for your target. Personally, I was interested in a C library (and printf), so I would primarily neuter all of the functions to return 0 or 1 or whatever it took to get each function to just work and not get in the way of linking, periodically making the file I/O functions operate on linked-in data in ROM/RAM. Basically, without replacing or modifying any other files in newlib I had a fair amount of success, so you are on the right path.
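To make that concrete, here is roughly what such a neutered syscalls.c might look like. This is a sketch only: the names follow the usual newlib conventions for arm-elf targets, the end symbol is assumed to come from your linker script, and the UART output is left as a placeholder because it is chip-specific:

    /* Minimal, mostly-neutered newlib porting layer (untested sketch). */
    #include <sys/stat.h>
    #include <errno.h>
    #undef errno
    extern int errno;

    void _exit(int status)
    {
        (void)status;
        for (;;)                 /* no OS to return to, so spin forever */
            ;
    }

    int _close(int file)  { (void)file; return -1; }
    int _isatty(int file) { (void)file; return 1; }

    int _fstat(int file, struct stat *st)
    {
        (void)file;
        st->st_mode = S_IFCHR;   /* pretend everything is a char device */
        return 0;
    }

    int _lseek(int file, int offset, int whence)
    {
        (void)file; (void)offset; (void)whence;
        return 0;
    }

    int _read(int file, char *ptr, int len)
    {
        (void)file; (void)ptr; (void)len;
        return 0;                /* no input device wired up yet */
    }

    int _write(int file, char *ptr, int len)
    {
        (void)file; (void)ptr;
        /* TODO: push each of the len bytes to your LPC2388 UART here */
        return len;              /* claim success so printf keeps working */
    }

    extern char end;             /* assumed to be defined by the linker script */
    static char *heap_ptr = 0;

    char *_sbrk(int incr)
    {
        char *prev;
        if (heap_ptr == 0)
            heap_ptr = &end;
        prev = heap_ptr;
        heap_ptr += incr;        /* note: no check for collision with the stack */
        return prev;
    }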

Has anyone worked with TestCocoon?

I was trying out TestCocoon the other day, and everything seemed great. I compiled my code using cscl, cslib, and cslink, expecting this to take care of all the instrumentation. I get some .csmes files and .exe.csmes files, but when I load them into the CoverageBrowser I cannot see anything relevant: no covered/uncovered lines; all the lines are grey.
Is anything else needed for TestCocoon to report coverage? Do I need to modify my source files? I also posted on their forums here, but got no result:
http://www.testcocoon.org/forum/viewtopic.php?f=8&t=44
I tried this tool with a few projects using Visual Studio 2008, and here is what I found:
Pros:
- it can collect results from multiple runs; you can run your software on different machines and collect the results together
- it has a useful GUI for browsing results
- you can merge coverage from many modules and analyse it as a whole application
- the forum works; I submitted two problems and fixes were implemented within a few days
- it works almost without any problems (I found two minor compilation problems) on quite complicated sources, with tons of templates, boost::spirit parsers, other Boost stuff (including meta-programming modules etc.), STL, Qt (everything together)
- it's well documented
- it's free
Cons:
- instrumentation is definitely slow
- multi-process compilation of a single project under Visual Studio 2008 doesn't work; only one file at a time is compiled, which makes building slower (you will get better performance building a whole solution with many projects)
So far I haven't tried to use this tool for continuous coverage measurement.
Either way, in my opinion it's worth a try.
BTW, Tony, PC-Lint is a static-analysis tool, isn't it? Interesting idea to compare it with a dynamic-analysis tool...
TestCocoon (now at 1.6.7) works well with the small C code bases we tend to unit test. The performance impact seems about normal for other instrumentation methods we've used.
We are able to extract coverage information in our makefiles and the coverage browser is very useful.
Don't use TestCocoon. I am currently using it, and it's shoddy as hell. Pay for something better (it will cost a lot). It is the ultimate death sentence, seriously, don't do it. Whatever you do, stay away from TestCocoon at all costs. Worst move ever. You might as well sell your kids for drug money.

build script - how to do it

About 2 months ago I took over the build process at my current company. Even though I don't have much knowledge of it, I was the only one with enough time, so I didn't have much choice.
The situation is not that good, and I would like to do the following:
Label files in SourceSafe with a version (for example, ProjectName PV 1.2)
Get files from SourceSafe into a specific directory
Build the VB6/C++/C# projects (yes, there are all kinds of them)
Build the InstallShield setups
For now this is partly done using batch scripts (one for labeling and getting, one for building, etc.), so when a build starts I pretty much have to babysit it.
A good part of this code could be reused.
Any recommendations on how to do this better? One big problem is the whole bunch of dependencies between projects. Also, labeling has to increment the version and, if necessary, change PV to EV.
I would like to minimize user interaction as much as possible: one click on one build script (Spolsky is god) and all is done, with no need to increment the version, set where to get the files, and similar stuff.
Is batch scripting the best way to go? Should I do some of the functionality with MSBuild? Are there any other options?
Specific code is not needed (though it wouldn't hurt); for now I just need ideas on how to improve the process.
Tnx,
Marko
Since you already have a build system (even though some of it is currently "manual"), whatever you do, don't start over from scratch.
(1) Make sure you have a test machine (or Virtual Machine) on which to work. Thus you can make changes and improvements without having to worry about breaking anything.
(2) Put all of your build scripts and tools in version control, not just the source code. Then as you make changes, see if they work. If they do, then save them to version control. If they don't, then roll them back.
(3) Choose one area to work on at a time. Don't try to do everything at once. Going from a lot of manual work to "one-click" will take time no matter what build system you're working with.
Sounds like you want a continuous integration solution, like CC.Net. It has configuration options to do all the things you want and a great community to answer questions.
Also, batch scripting is probably not a good option. Sophisticated build and integration tools will let you feed parameters into the build and create different builds for different environments (test, production, etc.). Batch scripting will involve a lot of hand-coding and glue.
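If you do try MSBuild as the single entry point, a thin top-level project can at least make the build order explicit while shelling out for the legacy steps. A rough sketch with made-up solution and project names; the SourceSafe labeling step would likewise be an Exec call to ss.exe:

    <Project DefaultTargets="Full" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
      <!-- Hypothetical solutions; the list order expresses the dependencies. -->
      <ItemGroup>
        <Sln Include="Common\Common.sln" />
        <Sln Include="App\App.sln" />
      </ItemGroup>
      <Target Name="Solutions">
        <MSBuild Projects="@(Sln)" Targets="Build" Properties="Configuration=Release" />
      </Target>
      <!-- VB6 and InstallShield have no native MSBuild support, so shell out. -->
      <Target Name="Vb6">
        <Exec Command="vb6.exe /make Legacy\OldApp.vbp" />
      </Target>
      <Target Name="Full" DependsOnTargets="Solutions;Vb6" />
    </Project>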

Is there a script that turns a Pharo core image into something more useful, that would include an OmniBrowser?

I cannot use the most recent Pharo dev release because of some strange issues with the compiler built into Pharo. So I was wondering: is there a quick way to install all the nifty extras that the core image misses, compared to the dev image?
Every non-core Pharo image comes with the script that was used to build it. Just edit that file and drag-and-drop it onto a new core image.
You could also tell me what you don't like in the Pharo images so that I can enhance them.
There is also the script I published on the Pharo wiki that I use to build my images:
http://code.google.com/p/pharo/wiki/ImageBuildScripts
Of course it is very specific to my preferences and needs, but you can take it as an example and adapt it to your own needs.
CommandShell works with Pharo 9.10.10. You will hit several errors as you try to load the package due to Pharo lacking MVC, but you can simply proceed past the first bunch and abandon the last one (that tries to actually open a CommandShell in Morphic). At that point, you'll have a class called PipeableOSProcess that can be used very easily to grab output. For example:
(PipeableOSProcess command: 'ls /bin') output
will return the contents of your bin directory as a string.
Ok, OB itself can be easily downloaded using ScriptLoader loadSuperOB.
Damien adds (from a comment below): the problem with that approach is that nobody really maintains it. Moreover, you miss some configuration steps that enhance the use of OB (for example, you won't have the OB-based browsers if you ask for the senders of a message from a workspace).