How can I enable allocation data collection when profiling with the dotMemory CLT?

I'm getting comfortable with dotMemory CLT and I'd like to understand if/how I can enable the collection of allocation data with a command line flag.
With the API, I'm aware of the ability to leverage MemoryProfiler.EnableAllocations, and with the desktop application I simply check a box.
But I find no references to this concept with respect to the CLT.
Attempting to use start doesn't do the trick, and poring over dotMemory help start doesn't reveal anything promising.
Is this simply not supported, or am I missing or misunderstanding a critical section of the documentation?

Line copied from dotMemory.exe help start | more:
[--collect-alloc|-c] Collect callstack allocation data (impacts performance!)
Example: dotMemory start "C:\Path\To\YourProgram.exe" -c

How to fix the problem of downloading fasttext-model300?

I'm using Windows 10 and Python 3.3. I tried to download fasttext_model300 to calculate soft cosine similarity between documents, but when I run my Python file, it stops after reaching this statement:
fasttext_model300 = api.load('fasttext-wiki-news-subwords-300')
There are no errors and no "not responding" message; it just stops without any reaction.
Does anybody know why this happens?
Thanks
I'd recommend against using the gensim api.load() functionality. It dynamically runs new, unversioned source code from remote servers, which is opaque in its operations and suboptimal for maintaining a secure local configuration or for debugging any issues that occur.
Instead, find the exact data files you trust and download them as plain data. Then use specific library operations, like the KeyedVectors.load_word2vec_format() method, to instantiate exactly the model you need, using precise local-file paths you understand.
Following those steps may make it clearer what, if anything, is going wrong. If it doesn't, try also enabling logging at the INFO level to gather more information about what progress is made before failure (and add any new details as a comment or to your question).
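For example, here is a minimal sketch of that approach, assuming you have already downloaded a word2vec-format vector file yourself (the file name below is only a placeholder for whatever file you actually fetched and trust):

import logging
from gensim.models import KeyedVectors

# INFO-level logging reports loading progress, so a silent hang is easier to diagnose
logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.INFO)

# Placeholder path: point this at the exact local file you downloaded yourself
model = KeyedVectors.load_word2vec_format('wiki-news-300d-1M.vec', binary=False)
print(model.most_similar('computer', topn=3))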
python3 -m gensim.downloader --download fasttext-wiki-news-subwords-300
Try using this. Source: https://awesomeopensource.com/project/RaRe-Technologies/gensim-data
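If that download completes, a short sketch of loading the cached copy afterwards (gensim's downloader keeps it in a local gensim-data directory, so load() should reuse it rather than fetch it again):

import gensim.downloader as api

# after the command-line download above, this should pick up the local copy
model = api.load('fasttext-wiki-news-subwords-300')
print(model.most_similar('computer', topn=3))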

Flashing target with GHS probe using command line

We're using the Green Hills MULTI IDE and a Green Hills Debug Probe to program and debug our target system (a ColdFire-based, bare-metal system). Currently I flash the target using the IDE debugger GUI, but I would prefer to use a command-line interface to do it.
The documentation is fairly sketchy, and only gives a very simple example. As far as I can tell I should be able to use grun with gflash to do this, but I'm having a hard time figuring out which GUI fields map to which grun options. Anyone with any experience of this?
Basically I need to be able to specify (these are the fields in the IDE's flash dialog):
Flash device (this one I've got figured out I think)
Base address
Image file (we use raw images)
Offset in flash
Alternate RAM base
Alternate flash utility
Possibly also alternate MBS script
Any tips, tricks, or pointers to better documentation than the standard GHS one? Would be much appreciated!
Is the debugger command reference of any help here? You can use those commands to download your image onto the hardware. I can share more details if this helps, or you can share your solution if you've already found one.
I used mpadmin for this purpose.
mpadmin -update <IP-addr_of_your-probe | -usb> firmware.frm

WSO2 BAM Incremental Analysis

According to the documentation here, this feature is experimental but I would like to know if anyone is using it successfully. I already have some data so I am trying use case 4.
I tried to run an update Hive query with the #Incremental annotation, but with it nothing goes into my RDB anymore.
If I remove it, everything works, but I want to take advantage of this feature because of the large amount of stored data, which makes query execution very slow.
Any suggestion or help is greatly appreciated.
The incremental analysis feature works fine in the partially distributed setup, but it wasn't thoroughly tested with an external Hadoop cluster, hence it was marked as 'experimental'. Anyhow, if you find any bugs in it, you can report them in JIRA.
To answer your question: you need to enable incremental processing for your stream first, and then add the incremental annotation. The detailed steps are as follows.
1) Add the property 'streams.definitions.defn1.enableIncrementalIndex=true' to the streams.properties file as explained here, and create a toolbox that consists of only the stream definition artefact as explained here.
2) Install the toolbox. This registers the stream definition you mentioned in the toolbox for incremental analysis. From this point onwards, the incoming data will be incrementally processed.
3) Now add the #Incremental annotation to the query. The first iteration will consider all the available data, since you enabled incremental analysis in the middle of processing, but from the next iteration onwards it will only consider the new batch of data.
This feature is marked as experimental because there may still be some critical bugs. A more stable version of this feature will ship in the next BAM release.

How to autostart a program from floppy disk on a Commodore c64

Good news: my C64 is still running after spending many years in my attic.
But what I always wanted to know is:
How can I automatically load & run a program from a floppy disk that is already inserted
when I switch on the c64?
Some auto-running command like load "*",8,1 would be adequate...
Regards
MoC
You write that a command that you type in, like LOAD"*",8,1 would be adequate. Can I assume, then, that the only problem with that particular command is that it only loads, but doesn't automatically run, the program? If so, you have a number of solutions:
If it's a machine language program, then you should type LOAD"<FILENAME>",8,1: and then (without pressing <RETURN>) press <SHIFT>+<RUN/STOP>.
If it's a BASIC program, type LOAD"<FILENAME>",8: and then (without pressing <RETURN>) press <SHIFT>+<RUN/STOP>.
It is possible to write a BASIC program such that it automatically runs when you load it with LOAD"<FILENAME>",8,1. To do so, first add the following line to the beginning of your program:
0 POKE770,131:POKE771,164
Then issue the following commands to save the program:
PRINT"{CLR}":POKE770,113:POKE771,168:POKE43,0;POKE44,3:POKE157,0:SAVE"<FILENAME>",8
This is not possible without some custom cartridge.
One way to fix this would be getting the Retro Replay cartridge and hacking your own code for it.
I doubt there is a way to do it; you would need a cartridge which handles this case and I don't think one like that exists.
Actually, a better and more suitable solution is EasyFlash. Retro Replay is commonly used with its own ROM; since it is a very useful cartridge with its default ROM, I would never flash another ROM onto it. It is also more expensive than EasyFlash if you don't already have either cartridge.
At the moment I have the Prince Of Persia (!) ROM written to my EasyFlash, and when I switch on my C64, it autoruns just like you asked for.
Not 100% relevant, but the C128 can autoboot disks in C128 mode. For example, Ultima V (which has music on the C128, but not on the C64 or on a C128 in C64 mode) autoboots.
As for cartridges, I'd recommend the 1541 Ultimate 2. It can also run games from cartridge ROM images (although Prince of Persia doesn't work for me for some reason, perhaps a software issue?), but you also get a rather good floppy emulator (which also makes it easier to transfer stuff to real disks), an REU, a tape interface (if you order it), etc.
If you are working with an ML program, there are several methods. If you aren't worried about ever returning to the normal READY prompt without a RESET, you can have a small loader that loads into the stack ($0100-$01FF). The loader would just load the next section of code, then jump to it. It would start at $0102 and needs to be as small as possible. Many times the name of the next piece to load is only 2 characters, so the file name can be placed at $0100 and $0101. Then all you need to do is call SETLFS, SETNAM, and LOAD, then JMP to the loaded code. Fill the rest of the stack area with $01. It is also rather safe to save only $0100-$010D so that the entire program fits in a single disk block.
One issue with this is that it clears out past stack entries (so your program will need to reset the stack pointer back to the top). If your program tries to do a normal RTS out of itself, random things can occur. If you want to exit the program, you'll need to JMP through the reset vector ($FFFC by default) to do so.

Print complete control flow through gdb including values of variables

The idea is that, given a specific input to the program, I want to automatically step through the complete program and dump its control flow along with all the data being used, like classes and their variables. Is there a straightforward way to do this? Can it be done by scripting on top of gdb, or does it require modifying gdb?
OK, the reason for this question is an idea for a debugging tool. Given two different inputs to a program, one causing an incorrect output and the other a correct one, it would tell which parts of the control flow differ between them.
So what I think is needed is a complete dump of these two control flows going into a diff engine. If the two inputs follow similar control flows, then their diff would (in many cases) give a good idea of why the bug exists.
This could be made into a very engaging tool with many features built on top of it.
Tell us a little more about the environment. dtrace, for example, will do a marvelous job of this in Solaris or Leopard. gprof is another possibility.
A bare-bones version of this could be done with yes(1) or expect(1).
If you want to get fancy, GDB can be scripted with Python in some versions.
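As a rough sketch of that route (everything here is an assumption you would adapt: the program name, the choice to start at main, the output file name, and stepping until the inferior exits), a script like the following, run with gdb -x trace_flow.py ./your_program, steps through the program and dumps each source location plus the visible locals:

import gdb

# keep gdb from pausing on "--Type <RET> for more--" prompts while we log
gdb.execute("set pagination off")
gdb.execute("break main")   # assumption: start tracing at main
gdb.execute("run")

with open("trace.log", "w") as log:
    while True:
        try:
            frame = gdb.selected_frame()
            sal = frame.find_sal()
            where = sal.symtab.filename if sal.symtab else "??"
            log.write("%s:%d in %s\n" % (where, sal.line, frame.name() or "??"))
            # dump the local variables visible at this point
            log.write(gdb.execute("info locals", to_string=True))
            gdb.execute("step", to_string=True)
        except gdb.error:
            # the inferior exited (or stepping failed); stop tracing
            break

Running this once per input and diffing the two trace.log files with an ordinary diff tool would give roughly the control-flow comparison the question describes.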
What you are describing sounds a bit like gdb's "tracepoint debugging". See gdb's internal help ("help tracepoint"). You can also see a whitepaper here: http://sourceware.org/gdb/talks/esc-west-1999/
Unfortunately, this functionality is not currently implemented for native debugging, but I believe that CodeSourcery is doing some work on it.
Check this out: unlike Coverity, Fenris is free and widely used.
How to print the next N executed lines automatically in GDB?