SystemC cannot use "+" / "-" operators in Visual Studio 2019 - systemc

I am trying to build a counter: when the "dec1" signal is high, the 8-bit unsigned integer counter decreases by 1. I am using Visual Studio 2019 to compile the counter.cpp file, and a "Hello World" .cpp runs successfully.

A SystemC signal does not provide access to all member functions of its internal value type, only an implicit conversion. You'll need to write the long form of
counter1 = counter1 + 1;

SystemC signals cannot take part in arithmetic directly,
so read the incoming signal into a local variable first and let that handle the type conversion.
sc_in<sc_uint<16>> counter;
counter = counter + 1; //will not work
sc_uint<16> local_counter = counter.read(); //this will work
local_counter = local_counter + 1; //will work
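For reference, here is a minimal sketch of how the decrementing counter from the question might be written. The module and port names are illustrative, not taken from the question's code:

#include <systemc.h>

// Minimal sketch: an 8-bit counter that decrements by 1 whenever dec1 is
// high on a rising clock edge. Arithmetic is done on a plain sc_uint<8>
// data member, so the "+" / "-" operators work without any ambiguity.
SC_MODULE(Counter) {
    sc_in<bool>        clk;
    sc_in<bool>        dec1;
    sc_out<sc_uint<8>> count;

    sc_uint<8> value;  // ordinary member, not a signal

    void step() {
        if (dec1.read())          // read() gives the plain bool value
            value = value - 1;
        count.write(value);
    }

    SC_CTOR(Counter) : value(0) {
        SC_METHOD(step);
        sensitive << clk.pos();
    }
};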

Related

I am getting an error when checking my SystemVerilog code in Quartus II

I have this simple code checked with Quartus II. First, it gave me an error about the 5000-iteration for-loop limit, so I tried changing the Verilog constant loop limit setting, and now it is giving me this error:
Error (293007): Current module quartus_map ended unexpectedly. Verify that you have sufficient memory available to compile your design. You can view disk space and physical RAM requirements on the System and Software Requirements page of the Intel FPGA website (http://dl.altera.com/requirements/).
Is this related to a tool limitation, or am I doing something wrong in my code?
Here is my code:
module Branch_status_table #(parameter BST_length = 16383) //16383
(
output reg [2:1] status,
output reg [32:1] PC_predict_o,
input wire [2:1] status_update,
input wire [32:1] PC_in, PC_update,
input wire [32:1] PC_predict_update,
input wire clk,en_1,RST
);
wire [14:1] PC_index, PC_index_update;
//Internal memory
reg [2:1] status_bits [BST_length:0];
reg [32:1] PC_predict [BST_length:0];
reg [16:1] PC [BST_length:0];
//Combinational
assign PC_index = PC_in [16:3];
assign PC_index_update = PC_update [16:3];
//
initial begin
for ( int i=0; i <= BST_length; i=i+1) begin
status_bits[i] <= 0;
PC_predict[i] <= 0;
PC[i]<=0;
end
end
//Prediction
always_ff @(posedge clk) begin
if ( (PC[PC_index]==PC_in[32:17]) && (status_bits[PC_index]!=0) ) begin
status <= status_bits [PC_index];
PC_predict_o <= PC_predict [PC_index];
end
else begin
status <= 0;
PC_predict_o <= 0;
end
end
//Update
always_ff @(posedge clk) begin
if (en_1==1) begin
status_bits[PC_index_update] <= status_update;
PC [PC_index_update] <= PC_update[32:17] ;
PC_predict[PC_index_update] <= PC_predict_update;
end
else begin
status_bits[PC_index_update] <= status_bits[PC_index_update] ;
PC [PC_index_update] <= PC [PC_index_update] ;
PC_predict[PC_index_update] <= PC_predict[PC_index_update] ;
end
end
endmodule
There is a coding issue and maybe a resource utilization issue.
The coding issue:
The code infers block RAM and is attempting to initialize/reset it.
In general you can't reset block rams using a single initial block.
That style of initialization of arrays can be done in a testbench, not in RTL.
Synthesis tools have physical limits.
To reset/initialize a block RAM you must write 0 to each address. Use the same type of synchronous process (always @(posedge clk)) because the memory is a synchronous device. Put a mux in front of the write port and use a state machine at start-up to write 0's to every address; when that state machine finishes, move to a state where the BRAM behaves the way you normally want it to.
This page discusses the issue.
https://www.edaboard.com/threads/how-to-clear-reset-my-bram-in-vhdl-xilinx.247572/
This is a Xilinx related answer; other vendors work the same way.
Summarizing the coding issue: You probably don't need to initialize and you can't do it this way:
initial begin
for ( int i=0; i <= BST_length; i=i+1) begin
status_bits[i] <= 0;
PC_predict[i] <= 0;
PC[i]<=0;
end
end
Potential Utilization Issue:
FPGAs have finite resources.
The tool seems to be indicating that the code infers more block ram than the part has. To verify this, change parameter BST_length from 16K to something small like 8 and see if the utilization issue ("insufficient memory") goes away. If it goes away then re-design using less memory.
This can also be analyzed by hand knowing how much BRAM the part has and how much you are inferring. Don't infer more than the part has.
The tool gives different answers for small code changes because there are two issues tied to the same BRAM inference; when the code changes a little, the tool stops on the other issue instead. This is how the tools sometimes work: a first error can mask a second one by aborting before the second error is ever reached.

Measuring Program Execution Time with Cycle Counters

I am confused by this particular line:
result = (double) hi * (1 << 30) * 4 + lo;
of the following code:
void access_counter(unsigned *hi, unsigned *lo)
// Set *hi and *lo to the high and low order bits of the cycle
// counter.
{
asm("rdtscp; movl %%edx,%0; movl %%eax,%1" // Read cycle counter
: "=r" (*hi), "=r" (*lo) // and move results to
: /* No input */ // the two outputs
: "%edx", "%eax");
}
double get_counter()
// Return the number of cycles since the last call to start_counter.
{
unsigned ncyc_hi, ncyc_lo;
unsigned hi, lo, borrow;
double result;
/* Get cycle counter */
access_counter(&ncyc_hi, &ncyc_lo);
lo = ncyc_lo - cyc_lo;
borrow = lo > ncyc_lo;
hi = ncyc_hi - cyc_hi - borrow;
result = (double) hi * (1 << 30) * 4 + lo;
if (result < 0) {
fprintf(stderr, "Error: counter returns neg value: %.0f\n", result);
}
return result;
}
The thing I cannot understand is why hi is being multiplied by 2^30 and then by 4, and then lo added to it. Can someone please explain what is happening in this line of code? I do know what hi and lo contain.
The short answer:
That line turns a 64bit integer that is stored as 2 32bit values into a floating point number.
Why doesn't the code just use a 64bit integer? Well, gcc has supported 64bit numbers for a long time, but presumably this code predates that. In that case, the only way to support numbers that big is to put them into a floating point number.
The long answer:
First, you need to understand how rdtscp works. When this assembler instruction is invoked, it does 2 things:
1) Sets ecx to the IA32_TSC_AUX MSR. In my experience, this generally just means ecx gets set to zero.
2) Sets edx:eax to the current value of the processor's time-stamp counter. This means that the lower 32bits of the counter go into eax, and the upper 32bits are in edx.
With that in mind, let's look at the code. When called from get_counter, access_counter is going to put edx in 'ncyc_hi' and eax in 'ncyc_lo.' Then get_counter is going to do:
lo = ncyc_lo - cyc_lo;
borrow = lo > ncyc_lo;
hi = ncyc_hi - cyc_hi - borrow;
What does this do?
Since the time is stored in 2 different 32bit numbers, if we want to find out how much time has elapsed, we need to do a bit of work to find the difference between the old time and the new. When it is done, the result is stored (again, using 2 32bit numbers) in hi / lo.
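If it helps, here is a small standalone C sketch of that borrow logic (the values are made up purely for illustration):

#include <stdio.h>

// Illustrative only: subtract two 64-bit counts held as hi/lo 32-bit halves,
// propagating a borrow from the low half into the high half.
int main(void) {
    unsigned old_hi = 0x00000001u, old_lo = 0xFFFFFFF0u; /* earlier reading */
    unsigned new_hi = 0x00000002u, new_lo = 0x00000010u; /* later reading   */

    unsigned lo = new_lo - old_lo;   /* wraps around to 0x20             */
    unsigned borrow = lo > new_lo;   /* the wrap happened, so borrow = 1 */
    unsigned hi = new_hi - old_hi - borrow;

    printf("hi=%u lo=%u\n", hi, lo); /* prints hi=0 lo=32: 32 cycles apart */
    return 0;
}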
Which finally brings us to your question.
result = (double) hi * (1 << 30) * 4 + lo;
If we could use 64bit integers, converting 2 32bit values to a single 64bit value would look like this:
unsigned long long result = hi; // put hi into the 64bit number.
result <<= 32; // shift the 32 bits to the upper part of the number
result |= low; // add in the lower 32bits.
If you aren't used to bit shifting, maybe looking at it like this will help. If low = 1 and hi = 2, then expressed as hex numbers:
result = hi; 0x0000000000000002
result <<= 32; 0x0000000200000000
result |= low; 0x0000000200000001
But if we assume the compiler doesn't support 64bit integers, that won't work. While floating point numbers can hold values that big, they don't support shifting. So we need to figure out a way to shift 'hi' left by 32bits, without using left shift.
Ok then, shifting left by 1 is really the same as multiplying by 2. Shifting left by 2 is the same as multiplying by 4. Shifting left by [omitted...] Shifting left by 32 is the same as multiplying by 4,294,967,296.
By an amazing coincidence, 4,294,967,296 == (1 << 30) * 4.
So why write it in that complicated fashion? Well, 4,294,967,296 is a pretty big number. In fact, it's too big to fit in a 32bit integer. Which means that if we put it in our source code, a compiler that doesn't support 64bit integers may have trouble figuring out how to process it. Written like this, the compiler can generate whatever floating point instructions it might need to work on that really big number.
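Here is a quick standalone check of that equivalence (illustrative only; the values are arbitrary):

#include <stdio.h>

// Multiplying by (1 << 30) * 4 is the same as shifting left by 32, just
// expressed with constants that a 32-bit-only compiler can digest.
int main(void) {
    unsigned hi = 2, lo = 1;

    double as_double = (double) hi * (1 << 30) * 4 + lo;
    unsigned long long as_uint64 = ((unsigned long long) hi << 32) | lo;

    printf("%.0f vs %llu\n", as_double, as_uint64); /* both print 8589934593 */
    return 0;
}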
Why the current code is wrong:
It looks like variations of this code have been wandering around the internet for a long time. Originally (I assume) access_counter was written using rdtsc instead of rdtscp. I'm not going to try to describe the difference between the two (google them), other than to point out that rdtsc does not set ecx, and rdtscp does. Whoever changed rdtsc to rdtscp apparently didn't know that, and failed to adjust the inline assembler stuff to reflect it. While your code might work fine despite this, it might do something weird instead. To fix it, you could do:
asm("rdtscp; movl %%edx,%0; movl %%eax,%1" // Read cycle counter
: "=r" (*hi), "=r" (*lo) // and move results to
: /* No input */ // the two outputs
: "%edx", "%eax", "%ecx");
While this will work, it isn't optimal. Registers are a valuable and scarce resource on i386. This tiny fragment uses 5 of them. With a slight modification:
asm("rdtscp" // Read cycle counter
: "=d" (*hi), "=a" (*lo)
: /* No input */
: "%ecx");
Now we have 2 fewer assembly statements, and we only use 3 registers.
But even that isn't the best we can do. In the (presumably long) time since this code was written, gcc has added both support for 64bit integers and a function to read the tsc, so you don't need to use asm at all:
unsigned int a;
unsigned long long result;
result = __builtin_ia32_rdtscp(&a);
'a' is the (useless?) value that was being returned in ecx. The function call requires it, but we can just ignore the returned value.
So, instead of doing something like this (which I assume your existing code does):
unsigned cyc_hi, cyc_lo;
access_counter(&cyc_hi, &cyc_lo);
// do something
double elapsed_time = get_counter(); // Find the difference between cyc_hi, cyc_lo and the current time
We can do:
unsigned int a;
unsigned long long before, after;
before = __builtin_ia32_rdtscp(&a);
// do something
after = __builtin_ia32_rdtscp(&a);
unsigned long long elapsed_time = after - before;
This is shorter, doesn't use hard-to-understand assembler, is easier to read and maintain, and produces the best possible code.
But it does require a relatively recent version of gcc.
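Put together, a complete sketch of the builtin-based approach might look like this (assuming gcc or clang on x86; the workload is just a placeholder):

#include <stdio.h>

// __builtin_ia32_rdtscp returns the 64-bit TSC and stores IA32_TSC_AUX in aux.
int main(void) {
    unsigned int aux;
    volatile double sum = 0.0;

    unsigned long long before = __builtin_ia32_rdtscp(&aux);
    for (int i = 0; i < 1000000; ++i)
        sum += i;                      /* the work being timed */
    unsigned long long after = __builtin_ia32_rdtscp(&aux);

    printf("elapsed cycles: %llu\n", after - before);
    return 0;
}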

ROL / ROR on variable using inline assembly only in Objective-C [duplicate]

This question already has answers here:
ROL / ROR on variable using inline assembly in Objective-C
(2 answers)
Closed 9 years ago.
A few days ago, I asked the question below. Because I was in need of a quick answer, I added:
The code does not need to use inline assembly. However, I haven't found a way to do this using Objective-C / C++ / C instructions.
Today, I would like to learn something. So I ask the question again, looking for an answer using inline assembly.
I would like to perform ROR and ROL operations on variables in an Objective-C program. However, I can't manage it – I am not an assembly expert.
Here is what I have done so far:
uint8_t v1 = ....;
uint8_t v2 = ....; // v2 is either 1, 2, 3, 4 or 5
asm("ROR v1, v2");
the error I get is:
Unknown use of instruction mnemonic with unknown size suffix
How can I fix this?
A rotate is just two shifts - some bits go left, the others go right - and once you see this, rotating is easy without assembly. The pattern is recognised by some compilers and compiled using the rotate instructions. See Wikipedia for the code.
Update: Xcode 4.6.2 (others not tested) on x86-64 compiles the double shift + or to a rotate for 32 & 64 bit operands; for 8 & 16 bit operands the double shift + or is kept. Why? Maybe the compiler understands something about the performance of these instructions, or maybe they just didn't optimise - but in general, if you can avoid assembler, do so; the compiler invariably knows best! Also, using static inline on the functions, or using macros defined in the same way as the standard macro MAX (a macro has the advantage of adapting to the type of its operands), can be used to inline the operations.
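For example, a sketch of that approach (the names ROTL32 and rotl32 are purely illustrative, not from the original post):

// Macro form, in the same spirit as MAX: it adapts to the type of its
// operands, but here it assumes a 32-bit value and a shift in 1..31.
#define ROTL32(value, shift) \
    (((value) << (shift)) | ((value) >> (32 - (shift))))

// static inline form for 32-bit operands; the extra masking makes a
// shift of 0 well defined instead of relying on the caller.
static inline unsigned rotl32(unsigned value, unsigned shift)
{
    shift &= 0x1f;
    return (value << shift) | (value >> ((32 - shift) & 0x1f));
}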
Addendum after OP comment
Here is the i86_64 assembler as an example, for full details of how to use the asm construct start here.
First the non-assembler version:
static inline uint32 rotl32_i64(uint32 value, unsigned shift)
{
// assume shift is in range 0..31 or subtraction would be wrong
// however we know the compiler will spot the pattern and replace
// the expression with a single roll and there will be no subtraction
// so if the compiler changes this may break without:
// shift &= 0x1f;
return (value << shift) | (value >> (32 - shift));
}
void test_rotl32(uint32 value, unsigned shift)
{
uint32 shifted = rotl32_i64(value, shift);
NSLog(@"%8x <<< %u -> %8x", value & 0xFFFFFFFF, shift, shifted & 0xFFFFFFFF);
}
If you look at the assembler output for profiling (so the optimiser kicks in) in Xcode (Product > Generate Output > Assembly File, then select Profiling in the pop-up menu at the bottom of the window) you will see that rotl32_i64 is inlined into test_rotl32 and compiles down to a rotate (roll) instruction.
Now producing the assembler directly yourself is a bit more involved than for the ARM code FrankH showed. This is because to take a variable shift value a specific register, cl, must be used, so we need to give the compiler enough information to do that. Here goes:
static inline uint32 rotl32_i64_asm(uint32 value, unsigned shift)
{
// i64 - shift must be in register cl so create a register local assigned to cl
// no need to mask as i64 will do that
register uint8 cl asm ( "cl" ) = shift;
uint32 shifted;
// emit the rotate left long
// %n values are replaced by args:
// 0: "=r" (shifted) - any register (r), result(=), store in var (shifted)
// 1: "0" (value) - *same* register as %0 (0), load from var (value)
// 2: "r" (cl) - any register (r), load from var (cl - which is the cl register so this one is used)
__asm__ ("roll %2,%0" : "=r" (shifted) : "0" (value), "r" (cl));
return shifted;
}
Change test_rotl32 to call rotl32_i64_asm and check the assembly output again - it should be the same, i.e. the compiler did as well as we did.
Further note that if the commented out masking line in rotl32_i64 is included it essentially becomes rotl32 - the compiler will do the right thing for any architecture all for the cost of a single and instruction in the i64 version.
So asm is there if you need it; using it can be somewhat involved, and the compiler will invariably do as well or better by itself...
HTH
The 32bit rotate in ARM would be:
__asm__("MOV %0, %1, ROR %2\n" : "=r"(out) : "r"(in), "M"(N));
where N is required to be a compile-time constant.
But the output of the barrel shifter, whether used on a register or an immediate operand, is always a full-register-width; you can shift a constant 8-bit quantity to any position within a 32bit word, or - as here - shift/rotate the value in a 32bit register any which way.
But you cannot rotate 16bit or 8bit values within a register using a single ARM instruction. No such instruction exists.
That's why, on ARM targets, when you use the "normal" (portable [Objective-]C/C++) code (in << xx) | (in >> (w - xx)), the compiler will generate one assembler instruction for a 32bit rotate, but at least two (a normal shift followed by a shifted or) for 8/16bit ones.
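For completeness, a portable sketch of the 8-bit case (rotl8 is an illustrative name, not from the answer above):

#include <stdint.h>

// 8-bit rotate left using the shift pattern above. ARM has no single
// instruction for this, so the compiler emits a shift plus a shifted OR
// (and the masking back down to 8 bits).
static inline uint8_t rotl8(uint8_t in, unsigned xx)
{
    xx &= 7;  /* keep the rotate count in 0..7 so the shifts stay defined */
    return (uint8_t)((in << xx) | (in >> ((8 - xx) & 7)));
}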

vb xor checksum

This question may already have been asked but nothing on SO actually gave me the answer I need.
I am trying to reverse engineer someone else's vb.NET code and I am stuck with what a Xor is doing here. Here is 1 line of the body of a soap request that gets parsed (some values have been obscured so the checksum may not work in this case):
<HD>CHANGEDTHIS01,W-A,0,7753.2018E,1122.6674N, 0.00,1,CID_V_01*3B</HD>
and this is the snippet of vb code that checks it
LastStar = strValues(CheckLoop).IndexOf("*")
StrLen = strValues(CheckLoop).Length
TransCheckSum = Val("&h" + strValues(CheckLoop).Substring(LastStar + 1, (StrLen - (LastStar + 1))))
CheckSum = 0
For CheckString = 0 To LastStar - 1
CheckSum = CheckSum Xor Asc(strValues(CheckLoop)(CheckString))
Next '
If CheckSum <> TransCheckSum Then
'error with the checksum
...
OK, I get it up to the For loop. I just need an explanation of what the Xor is doing and how that is used for the checksum.
Thanks.
PS: As a bonus, if anyone can provide a c# translation I would be most grateful.
Using Xor is a simple way to calculate a checksum. The idea is the same as calculating a parity bit, except that eight parity bits are calculated at once, one for each bit position across the bytes. More advanced algorithms like CRC and MD5 are often used to calculate checksums for more demanding applications.
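As a tiny illustration (not the real message): Asc("A") is 65 (0100 0001) and Asc("B") is 66 (0100 0010), so checksumming just "AB" gives 65 Xor 66 = 3 (0000 0011) - each bit of the result is 1 exactly when an odd number of the inputs have a 1 in that position.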
The C# code would look like this:
string value = strValues[checkLoop];
int lastStar = value.IndexOf("*");
int transCheckSum = Convert.ToByte(value.Substring(lastStar + 1, 2), 16);
int checkSum = 0;
for (int checkString = 4; checkString < lastStar; checkString++) {
checkSum ^= (int)value[checkString];
}
if (checkSum != transCheckSum) {
// error with the checksum
}
I made some adjustments to the code to accommodate the translation to C#, plus some changes that simply make sense. I declared the variables used, and used camel case rather than Pascal case for local variables. I use a local variable for the string, instead of getting it from the collection each time.
The VB Val method stops parsing when it finds a character that it doesn't recognise, so to use the framework methods I assumed that the length of the checksum is two characters, so that it can parse the string "3B" rather than "3B</HD>".
The loop starts at the fourth character, to skip the first "<HD>", which should logically not be part of the data that the checksum should be calculated for.
In C# you don't need the Asc function to get the character code, you can just cast the char to an int.
The code is basically taking the character values and Xor-ing them together to check the integrity of the message; there is a very nice explanation of the operation on this page, in the Parity Check section: http://www.cs.umd.edu/class/sum2003/cmsc311/Notes/BitOp/xor.html

Looking for decimal to alphanumeric number base converter library in Visual Basic (or pseudo code algorithm) WITHOUT recursion

I'm looking for a decimal to alphanumeric number base converter library in Visual Basic that does not use recursion.
I found:
http://visualstudiomagazine.com/articles/2010/07/20/when-hexadecimal-is-not-enough.aspx
which includes a demo app but discovered that it uses recursion. The problem with it using recursion became apparent when I attempted to integrate the library into my own Visual Studio Express 2010 Visual Basic project: I got a stack overflow exception.
Now I could consider increasing the size of the memory allocated for the stack, but it might be hard to determine what this should be, given that the recursion depth might vary depending on the value to be converted.
My situation requires a reliable deterministic solution so I would prefer to discount the idea of using recursion.
I shall do more research and endeavour to write the algorithm from scratch but would rather not re-invent the wheel if it already exists so hence this question. A search on here did not quite give me what I was looking for.
Can you point me in the direction of an existing non-recursive decimal to alphanumeric converter library in Visual Basic?
This solution I provide myself appears to work - so simple too! But that's because it is a one-way conversion; the other libraries aim to do two-way conversion, back and forth between different bases - but I don't need both ways.
Public Class BaseConverter
    Public Shared Function ConvertToBase(num As Integer, nbase As Integer) As String
        Dim chars As String = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ"
        ' check if we can convert to another base
        If (nbase < 2 Or nbase > chars.Length) Then
            Return ""
        End If
        Dim r As Integer
        Dim newNumber As String = ""
        ' in r we have the offset of the char that was converted to the new base
        While num >= nbase
            r = num Mod nbase
            newNumber = chars(r) & newNumber
            'use: num = Convert.ToInt32(num / nbase)
            '(if available on your system)
            'otherwise:
            num = num \ nbase
            ' notice the back slash \ - this is integer division, i.e. the calculation only reports back
            ' the whole number of times that the divider will go into the number to be divided,
            ' e.g. 7 \ 2 produces 3 (contrasted with 7 / 2, which produces 3.5, a float that
            ' would cause the compiler to fail the compile with a type mismatch)
        End While
        ' the last number to convert
        newNumber = chars(num) & newNumber
        Return newNumber
    End Function
End Class
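As a quick sanity check of the loop: ConvertToBase(255, 36) should return "73", since 255 Mod 36 = 3 (giving the last digit) and 255 \ 36 = 7 (giving the leading digit).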
I created the above code in Visual Basic based on the C# code in the following link:
CREDIT: http://social.msdn.microsoft.com/Forums/en-US/csharpgeneral/thread/5babf71f-4375-40aa-971a-21c1f0b9762b/
("convert from decimal(base-10) to alphanumeric(base-36)")
public String ConvertToBase(int num, int nbase)
{
    String chars = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ";
    // check if we can convert to another base
    if (nbase < 2 || nbase > chars.Length)
        return "";
    int r;
    String newNumber = "";
    // in r we have the offset of the char that was converted to the new base
    while (num >= nbase)
    {
        r = num % nbase;
        newNumber = chars[r] + newNumber;
        num = num / nbase;
    }
    // the last number to convert
    newNumber = chars[num] + newNumber;
    return newNumber;
}
@assylias I couldn't get devx.com/vb2themax/Tip/19316 to work - I got the wrong value back. Thanks though for the suggestion.
There is no evidence that it works. I adjusted some declarations and the superficial structure of the code to get it to build in Visual Studio Express 2010 Visual Basic, then stepped through it in the debugger. The code is hard to follow: the variable names are not obvious and there are no comments explaining why it does what it does. From what I could understand, it did not seem to be doing what a base conversion should.