Can LLVM decide the execution order of a Machine Function Pass? The code generated by my pass is being messed up by optimisation

I am new to the LLVM backend. What I am trying to do is insert several register loads and stores between instructions, so I created a machine function pass to do this job. When I disable optimisation via -O0, everything works fine.
However, when I enable optimisation, I find that the code gets optimised in the wrong way. For example, some labels from the -O0 code are kept: it still emits jl label1; even though label1 does not exist in the -O3 code.
I am trying to figure out a way around this. My thought is that maybe I can decide the execution order of machine function passes and then run my pass at the very end. Or is there some other way to avoid the problem?
I have been searching for a while, but I haven't found anything useful.
Thanks for your kind help!
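EDIT: From looking at how in-tree targets work, it seems machine passes are scheduled through TargetPassConfig, so something like the sketch below is what I have in mind. The target and pass names are invented, and I'm not sure this is the right hook:

#include "llvm/CodeGen/TargetPassConfig.h"
using namespace llvm;

FunctionPass *createMyRegisterSpillPass(); // my pass, name invented

// Hypothetical pass config for my target: addPreEmitPass() runs after
// register allocation and the late machine-level optimisations, just
// before code emission, so nothing should reorder my code afterwards.
class MyTargetPassConfig : public TargetPassConfig {
public:
  MyTargetPassConfig(LLVMTargetMachine &TM, PassManagerBase &PM)
      : TargetPassConfig(TM, PM) {}

  void addPreEmitPass() override {
    addPass(createMyRegisterSpillPass());
  }
};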

How to fix the problem of downloading fasttext-model300?

I'm using Windows 10 and Python 3.3. I tried to download fasttext_model300 to calculate soft cosine similarity between documents, but when I run my Python file, it stops after reaching this statement:
fasttext_model300 = api.load('fasttext-wiki-news-subwords-300')
There are no errors and no "not responding" state; it just stops without any reaction.
Does anybody know why this happens?
Thanks
I'd recommend against using the gensim api.load() functionality. It dynamically runs new, unversioned source code from remote servers, which is opaque in its operations and suboptimal for maintaining a secure local configuration or debugging any issues that occur.
Instead, find the exact data files you trust and download them as plain data. Then use specific library operations, like the KeyedVectors.load_word2vec_format() method, to instantiate exactly the model you need, using precise local-file paths you understand.
Following those steps may make it clearer what, if anything, is going wrong. If it doesn't, try also enabling logging at the INFO level to gather more information about what progress is made before failure (and add any new details as a comment or to your question).
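For example, a minimal sketch along those lines (the local file name is a placeholder for whatever file you actually downloaded):

import logging
from gensim.models import KeyedVectors

logging.basicConfig(level=logging.INFO)  # show progress while loading

# Load the vectors from a plain local file instead of api.load();
# 'fasttext-wiki-news-subwords-300.vec' is a hypothetical path.
wv = KeyedVectors.load_word2vec_format('fasttext-wiki-news-subwords-300.vec')

print(wv.most_similar('computer', topn=3))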
Try using this:
python3 -m gensim.downloader --download fasttext-wiki-news-subwords-300
Source: https://awesomeopensource.com/project/RaRe-Technologies/gensim-data

How to find the size of a reg in Verilog?

I was wondering if there is a way to compute the size of a reg in Verilog. I researched it quite a bit and found $size(a), but that is SystemVerilog-only and won't work in my Verilog program.
Does anyone know an alternative for this?
As a side note, I'm also having some trouble with my test bench: when I update a value in the file, that change is not taken into consideration when I simulate. I've been told I might be using an old test bench, but the one I am continuously simulating is the only one available in this project.
EDIT:
To give you an idea of the problem: in my code there is a "start" signal, and when it is set to 1 the operation starts; otherwise, it stays idle. I began writing the test bench with start=0, tested and simulated it, then edited the test bench by setting start to 1. But when I simulate, the start signal remains 0 in the waveform. I checked whether I was using another test bench, but it is the only test bench in this project.
Given that I was on a deadline, I adapted the code to the "frozen" test bench. I now get all the results I want, but I wanted to test some other features of my code, so I created a new project and copy-pasted the code into new files (including the same test bench). But when I ran a simulation, the waveform displayed wrong results (even though I was using the exact same code in all modules and the test bench). Any idea why?
Any help would be appreciated :)
There is a standardised way to do this, but it requires you to use the VPI, which I don't think you get in ModelSim's student edition. In short, you have to write C code and dynamically link it to the simulator. In the C code, you can get object properties using routines such as vpi_get. Useful properties might be vpiSize (which is what you want), vpiLeftRange, vpiRightRange, and so on.
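A minimal sketch of that approach in C (registration details vary by simulator; the $show_size task name is illustrative):

#include "vpi_user.h"

/* Implements a custom system task, e.g. $show_size(my_reg), that
   prints the bit width of its argument. */
static PLI_INT32 show_size_calltf(PLI_BYTE8 *user_data) {
    vpiHandle systf = vpi_handle(vpiSysTfCall, NULL);
    vpiHandle args  = vpi_iterate(vpiArgument, systf);
    vpiHandle arg   = vpi_scan(args);

    vpi_printf("size = %d bits\n", (int)vpi_get(vpiSize, arg));
    vpi_free_object(args); /* iterator was not exhausted, so free it */
    return 0;
}

void show_size_register(void) {
    s_vpi_systf_data tf = {0};
    tf.type   = vpiSysTask;
    tf.tfname = "$show_size";
    tf.calltf = show_size_calltf;
    vpi_register_systf(&tf);
}

/* Simulators call every routine in this array at startup. */
void (*vlog_startup_routines[])(void) = { show_size_register, 0 };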
Having said all that, Verilog is essentially a static language, and objects have to be declared with a static width using constant expressions. Having a run-time method to determine an object's size is therefore of pretty limited value (since you should already know it), and may not solve whatever problem you actually have. Your question would make more sense for VHDL (and SystemVerilog?), which are much more dynamic.
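If what you actually need is the width inside your own code, the usual plain-Verilog idiom is to carry it as a parameter, so it is known everywhere by construction rather than queried at run time. A rough sketch:

module counter #(parameter WIDTH = 8) (
    input  wire            clk,
    output reg [WIDTH-1:0] count
);
    initial count = 0;

    always @(posedge clk)
        count <= count + 1'b1;

    // The width is available anywhere the parameter is in scope.
    initial $display("count is %0d bits wide", WIDTH);
endmodule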
Note on Icarus: the developers have pushed lots of SystemVerilog features back into the main language. If you take advantage of this, you may find that your code is not portable.
Second part of your question: you need to be specific about what your problem actually is.

Is it OK to add code with the sole purpose of making it easier to test?

My situation, as some background:
I'm writing a small JavaScript library which uses window.requestAnimationFrame to perform its animation loop. Because that function isn't standardised across browsers yet, the library internally creates a polyfill-ish function in a closure.
var requestAnim = window.requestAnimationFrame
    || window.webkitRequestAnimationFrame
    || window.mozRequestAnimationFrame
    || function (callback) { return setTimeout(callback, 1000 / 60); };
The issue here is that this makes it quite hard for me to test this code now. Previously, when it was using setTimeout, I would override that global function in the tests to simulate a number of frames passing synchronously.
Anyway, to the point of the question:
Right now, it seems like my options are either to leave some of my code untested, or to add extraneous features to the library with the sole purpose of making it easier to test. Neither of these options sound that great to me.
Without worrying too much about my specific case, in general, what should you do in this situation?
Yes, it is OK.
We don't write tests for the sake of testing. Testing is an acknowledgement of the fact that we aren't brilliant enough to write and maintain code perfectly without safety checks. All test code serves one and only one purpose: to make a better product. This is true whether it lives in the /test folder or in the /src folder. Therefore it is a mistake to think, "This is never called in production, therefore it is wrong to put it into /src!"
To be sure, there are other trade-offs to make, e.g. size (in an embedded product it makes a lot of sense to try everything you can to keep the /src folder small). But that is a completely different reason than merely "It's test-related".
I'd say it's fine to add testing code (unless what you're testing is some micro-optimisation). As Kilian said, nobody is perfect; that is the reason we do testing in the first place.
I +1d Kilian's answer, but I'd like to add my own ideas too:
In general, in what situations would there be code that you cannot easily test? This would be code that only runs under certain conditions which you can't re-create on your testing machine. Perhaps it would be easier to set a variable to decide whether this code should run or not; then you can set a breakpoint and change this variable when debugging (in your JavaScript case, using Firebug or Chrome's developer tools).
Or, as you say, add some testing code - a set of flags, maybe, at the top of the script to keep it neat? Then your if statements could be something like
if (shouldRunThisCode || isTestingThisCode) {
    doThisCode();
}
In short: of course it's fine to add code for the purpose of testing. I can't think of many scenarios where testing the code would require adding much code at all, though. If the code is implemented and intended to run at some point under certain conditions anyway, it should never be too hard to test.
In general, code that is hard to test is badly written. It is better to find the design problems your test difficulties are telling you about, and fix those, than to add code "just to make testing easier". Sometimes we do the latter anyway, because we judge the design problems too difficult to fix, but it should be the second choice. In your case, it sounds like the bad design you're exposing is in a library outside your control. In that case, adding code "just to make testing easier" is probably the best choice; you are unlikely to have enough control over an external library to improve its design.
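For illustration, one way to remove the test difficulty without adding test-only features is to make the frame scheduler an injectable dependency. A sketch; the Animator name and structure are made up:

// Accept the frame-scheduling function as an optional constructor
// argument; production code uses the real requestAnimationFrame.
function Animator(raf) {
    this.raf = raf || window.requestAnimationFrame.bind(window);
}

Animator.prototype.update = function (timestamp) {
    // advance the animation state here
};

Animator.prototype.start = function () {
    var self = this;
    this.raf(function tick(timestamp) {
        self.update(timestamp);
        self.raf(tick); // schedule the next frame
    });
};

// In a test, drive frames synchronously with a fake scheduler:
var pending = [];
var animator = new Animator(function (cb) { pending.push(cb); });
animator.start();
pending.shift()(16); // simulate one frame at t = 16ms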

Updating to PHP 5.3 with deprecated functions warning disabled

I'm very keen to update a number of our servers to PHP 5.3. This would be in readiness for Zend Framework 2 and also for the apparent performance improvements. Unfortunately, I have large amounts of legacy code on these servers which will be fixed in time, but cannot all be fixed before the migration. I'm considering updating but disabling the deprecated-function errors on all but a few development sites, where I can begin to work through updating the old code.
error_reporting(E_ALL ^ E_DEPRECATED);
Is there any fundamental reason why this would be a bad idea?
Well, you could forget that you set the flag and wonder why your application breaks in a future PHP update. It can be very frustrating to debug an application without proper error reporting. That's one reason I can think of.
However, if you do it, document it somewhere. It can save you the couple of hours you would otherwise spend before remembering that you set the flag at all.
If you haven't already you should read the migration guide with particular focus on Backward Incompatible Changes and Removed Extensions.
You have bigger issues than deprecation. Ignoring E_DEPRECATED will not suffice: because of the backward-incompatible changes there will also be other types of errors or, maybe even worse, unexpected behaviour.
Here's a simple example:
<?php
function goto($line) {
    echo $line;
}
goto(7);
?>
This code will work fine and output 7 in PHP 5.2.x but will give you a parse error in PHP 5.3.x.
What you need to do is take each item in that guide and check your code, updating where needed. To make this faster you could ignore the deprecated functionality in a first phase and just disable error reporting for E_DEPRECATED, but you can't assume that you will only get some harmless warnings when porting to another major PHP branch.
Also don't forget about your hack and fix the deprecated issues as soon as possible.
Note: I tried to answer the question from a practical point of view, so please don't tell me that ignoring warnings is bad. I know that, but I also know that time is not an infinite resource.
I presume you have some kind of test server? If not, you really should set one up and test your code on PHP 5.3. If your code is thoroughly unit tested, testing it will take seconds, and fixing it will be fairly quick too, as the unit tests will tell you exactly where to look. If not, consider making unit-testing it all a priority before the next release; in the meantime, go through it all, first with E_DEPRECATED warnings disabled, fixing anything that comes up, then with them re-enabled once you have time. You could also run a global find-and-replace for the easier-to-fix errors.
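For the mechanical part, a rough sketch of the kind of search that helps (the function list is just a sample of things deprecated or removed around 5.3):

grep -rnE 'ereg|split\(|set_magic_quotes_runtime' --include='*.php' .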

Can Make undefine a variable?

I'm working on an embedded system (RTXC) where I need to disable the debugger functionality, which is enabled through a #define. However, when I change the #define to an #undef, compilation goes fine, but when the linker runs it encounters an error about a nonexistent symbol that belongs to the debug code (which should have been compiled out when the debugger macro is not defined). Is there any way for Make to ensure that a preprocessor macro does not get defined, or stays undefined?
The answer to your question is no: Make can't absolutely prevent a macro from being defined by, say, a #define directive in the code.
You seem to have an elusive problem. It could be a bug in your Makefiles, a misspelled directive, a bad macro (if you'll pardon the tautology) or something trivial. I'd suggest burning the forest: cut out everything until the problem stops, then see where it was hiding. If you get down to HelloWorld and the problem persists, let us know.
No. You will need to fix the bug in your code.
More specifically, something is referencing the debug side of things outside of an #ifdef; Make won't be able to help you there.
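As an illustration of the usual failure mode (all names invented): the definition is guarded, but a call site is not, so the object file still references the symbol and the link fails.

/* debug.c -- the definition is only compiled in when DEBUG_ENABLED is set */
#ifdef DEBUG_ENABLED
#include <stdio.h>
void debug_trace(const char *msg) { printf("DEBUG: %s\n", msg); }
#endif

/* main.c -- BUG: this call site is not guarded, so the compiler still
 * emits a reference to debug_trace() and, with DEBUG_ENABLED undefined,
 * the linker reports an undefined symbol. */
void debug_trace(const char *msg);

void run(void) {
    debug_trace("starting"); /* should be wrapped in #ifdef DEBUG_ENABLED */
}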
Another possibility is that you have a .o file or something left over from a previous build; you might want to try cleaning the build tree.