The new SystemC library 2.3.0 was released in July 2012. It was reported to support modeling of concepts such as power domains and abstract schedulers. Has anyone checked or worked on how SystemC 2.3.0 can support the modeling of power domains and abstract schedulers? Any recommendations of references are appreciated!
SystemC IEEE Std 1666-2011 includes "new process control extensions, which enable and simplify the modelling of power domains and abstract schedulers", according to this website.
So it is these new process control extensions that provide the primitives for power-domain/scheduler modeling.
I examined the SystemC IEEE Std 1666-2011 LRM and, indeed, the sc_process_handle class now has more member functions: suspend, resume, disable and enable, sync_reset_on and sync_reset_off, kill and reset, and throw_it.
You can follow this example from the LRM to implement power domains (e.g., by disabling/enabling or resetting processes in response to events that trigger power shutdown or power-up sequences); a sketch of a small power-domain wrapper built on the same calls follows the example:
#include <systemc.h>

struct M1: sc_module
{
    M1(sc_module_name _name)
    {
        SC_THREAD(ticker);
        SC_THREAD(calling);
        SC_THREAD(target);
        t = sc_get_current_process_handle(); // handle to the most recently created process, i.e. target
    }

    sc_process_handle t;
    sc_event ev;

    void ticker()
    {
        for (;;)
        {
            wait(10, SC_NS);
            ev.notify();
        }
    }

    void calling()
    {
        wait(15, SC_NS);
        // Target runs at time 10 NS due to notification

        t.suspend();
        wait(10, SC_NS);
        // Target does not run at time 20 NS while suspended

        t.resume();
        // Target runs at time 25 NS when resume is called

        wait(10, SC_NS);
        // Target runs at time 30 NS due to notification

        t.disable();
        wait(10, SC_NS);
        // Target does not run at time 40 NS while disabled

        t.enable();
        // Target does not run at time 45 NS when enable is called

        wait(10, SC_NS);
        // Target runs at time 50 NS due to notification

        sc_stop();
    }

    void target()
    {
        for (;;)
        {
            wait(ev);
            cout << "Target awoke at " << sc_time_stamp() << endl;
        }
    }

    SC_HAS_PROCESS(M1);
};
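To make the power-domain angle concrete, here is a minimal sketch of a hypothetical power_domain controller built on these primitives. The module name, the pd_off/pd_on events, and the registration scheme are my own illustrative assumptions, not taken from the LRM:

// Minimal sketch of a power-domain controller built on the process control
// extensions shown above. Names (power_domain, pd_off, pd_on) are illustrative.
#include <systemc.h>
#include <vector>

struct power_domain : sc_module
{
    sc_event pd_off;                       // request to power the domain down
    sc_event pd_on;                        // request to power the domain back up
    std::vector<sc_process_handle> procs;  // processes belonging to this domain

    SC_CTOR(power_domain)
    {
        SC_THREAD(control);
    }

    // Register a process so that it is gated by this domain.
    void add_process(const sc_process_handle& h) { procs.push_back(h); }

    void control()
    {
        for (;;)
        {
            wait(pd_off);
            for (unsigned i = 0; i < procs.size(); i++)
                procs[i].disable();   // powered down: processes ignore their triggers

            wait(pd_on);
            for (unsigned i = 0; i < procs.size(); i++)
            {
                procs[i].enable();    // power restored
                procs[i].reset();     // restart each process from the top, like a power-on reset
            }
        }
    }
};

A process would typically be registered right after its SC_THREAD/SC_METHOD macro in the owning module's constructor, using sc_get_current_process_handle() exactly as M1 does for target above; pd_off and pd_on would then be notified by whatever models the power controller.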
#include <Wire.h>
#include "MAX30100_PulseOximeter.h"

#define REPORTING_PERIOD_MS 1000

// PulseOximeter is the higher level interface to the sensor
// it offers:
//  * beat detection reporting
//  * heart rate calculation
//  * SpO2 (oxidation level) calculation
PulseOximeter pox;

uint32_t tsLastReport = 0;

// Callback (registered below) fired when a pulse is detected
void onBeatDetected()
{
    Serial.println("Beat!");
}

void setup()
{
    Serial.begin(115200);

    Serial.print("Initializing pulse oximeter..");

    // Initialize the PulseOximeter instance
    // Failures are generally due to an improper I2C wiring, missing power supply
    // or wrong target chip
    if (!pox.begin()) {
        Serial.println("FAILED");
        for(;;);
    } else {
        Serial.println("SUCCESS");
    }

    // The default current for the IR LED is 50mA and it could be changed
    // by uncommenting the following line. Check MAX30100_Registers.h for all the
    // available options.
    // pox.setIRLedCurrent(MAX30100_LED_CURR_7_6MA);

    // Register a callback for the beat detection
    pox.setOnBeatDetectedCallback(onBeatDetected);
}

void loop()
{
    // Make sure to call update as fast as possible
    pox.update();

    // Asynchronously dump heart rate and oxidation levels to the serial
    // For both, a value of 0 means "invalid"
    if (millis() - tsLastReport > REPORTING_PERIOD_MS) {
        Serial.print("Heart rate:");
        Serial.print(pox.getHeartRate());
        Serial.print("bpm / SpO2:");
        Serial.print(pox.getSpO2());
        Serial.println("%");

        tsLastReport = millis();
    }
}
I want to output oxygen saturation with an Arduino.
If I run the sketch above and open the serial monitor, only "Initializing pulse oximeter.." is printed, and no data is transmitted after that.
I want to output the oxygen saturation and pulse values received from the sensor once per second on the serial monitor.
I am implementing an interrupt controller simulator, which will take signals from the rest of the HW modules in the simulation and run the ISR.
Below is some rough SystemC code to make the concept clear. We need the ISR to be handled even if FW_main is stuck inside a while(1) loop.
With the implementation below, execution stays inside the FW_main loop only. Adding a wait in FW_main is not what we want; we need correct interrupt controller functionality. Any ideas on how to get rid of this problem?
SC_MODULE (processor)
{
    sc_in<bool> interrupt;

    void ISR(void)
    {
        cout << "i am in ISR\n";
    }

    void FW_main(void)
    {
        while(1)
        {
            cout << "i am in FW_main\n";
        }
    }

    SC_CTOR (processor)
    {
        SC_METHOD(ISR);
        sensitive << interrupt;

        SC_THREAD(FW_main);
    }
};
Unfortunately, SystemC processes are cooperative, not preemptive. Even the SystemC kernel can't step in and suspend the FW_main process.
No processor system / FW truly gets stuck in a while loop this way. Any instruction set simulator must advance time in steps, driven by some sort of strobes or events, ideally clock edges.
A functional representation of the system you are trying to model would look something like the following.
SC_MODULE (processor)
{
    sc_in<bool> clk;
    sc_in<bool> interrupt;

    void ISR(void)
    {
        cout << "i am in ISR\n";
    }

    void FW_main(void)
    {
        cout << "i am in FW_main\n";
    }

    SC_CTOR (processor)
    {
        SC_METHOD(ISR);
        sensitive << interrupt;

        SC_METHOD(FW_main);
        sensitive << clk;
    }
};
There are two problems with the code I suggested above. First, you probably don't want an actual clock signal that needs toggling externally, or any sense of time at all for that matter. Second, in a single-core processor system, ISRs and FW_main aren't really parallel in nature. A more realistic implementation of what you are trying to model would be as follows.
SC_MODULE(processor)
{
    sc_in<bool> interrupt;

    void ISR(void)
    {
        cout << "i am in ISR\n";
    }

    void FW_main(void)
    {
        if (interrupt.read())
        {
            ISR();
        }
        cout << "i am in FW_main\n";
        // Run again in the next delta cycle and stay sensitive to the interrupt input.
        next_trigger(SC_ZERO_TIME, interrupt.value_changed_event());
    }

    SC_CTOR (processor)
    {
        SC_METHOD(FW_main);
    }
};
The next_trigger(SC_ZERO_TIME, interrupt.value_changed_event()) statement makes FW_main emulate a while(1) loop while also remaining sensitive to the interrupt input.
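For completeness, a hypothetical testbench for this model might look like the sketch below; sc_main, the stim module, and the delta-cycle counts are my assumptions, not part of the answer above. Note that because FW_main re-triggers itself every delta cycle, simulated time never advances on its own, so the stimulus here is written in terms of delta cycles rather than nanoseconds:

// Hypothetical testbench for the processor model above (illustrative only).
#include <systemc.h>

SC_MODULE (stim)
{
    sc_out<bool> interrupt;

    void pulse(void)
    {
        interrupt.write(false);
        for (int i = 0; i < 3; i++)
            wait(SC_ZERO_TIME);      // let FW_main spin for a few delta cycles
        interrupt.write(true);       // assert the interrupt
        wait(SC_ZERO_TIME);
        interrupt.write(false);      // deassert it again
        wait(SC_ZERO_TIME);
        sc_stop();
    }

    SC_CTOR (stim)
    {
        SC_THREAD(pulse);
    }
};

int sc_main(int, char*[])
{
    sc_signal<bool> irq;
    processor cpu("cpu");   // the module from the answer above
    stim gen("gen");
    cpu.interrupt(irq);
    gen.interrupt(irq);
    sc_start();
    return 0;
}

Running this prints "i am in FW_main" a few times per delta cycle and "i am in ISR" once the interrupt is seen as high.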
I am currently trying to use the performance monitor to generate an interrupt when the data cache miss counter overflows. I have enabled the PMU and the IRQ for the performance monitor (PMINTENSET is 1 for the counter). I can see that the overflow flag is set when the overflow occurs, but the interrupt is never triggered. I think I am missing something when setting up the interrupt. I am using Xilinx SDK 2018.2.
I have attached my code for setting up the interrupt:
XScuGic xInterruptController; /* Interrupt controller instance */

static void setup_interrupt(void)
{
    uint32_t status;
    XScuGic_Config *pxGICConfig;

    pxGICConfig = XScuGic_LookupConfig( XPAR_SCUGIC_0_DEVICE_ID );
    if (pxGICConfig == NULL)
    {
        xil_printf("\nERROR LOOKING UP CONFIGURATION");
        for(;;);
    }

    status = XScuGic_CfgInitialize( &xInterruptController, pxGICConfig, pxGICConfig->CpuBaseAddress );
    if (status != XST_SUCCESS)
    {
        xil_printf("\nERROR INITIALIZING CONFIGURATION");
        for(;;);
    }

    status = XScuGic_SelfTest(&xInterruptController);
    if (status != XST_SUCCESS)
    {
        xil_printf("\nERROR: SELF TEST FAILURE");
        for(;;);
    }

    /*
     * Initialize the exception table.
     */
    Xil_ExceptionInit();

    status = RegisterInterruptExceptions(&xInterruptController);
    if (status != XST_SUCCESS) {
        xil_printf("\nERROR: SetUP Interrupt System Failed");
        for(;;);
    }

    status = XScuGic_Connect( &xInterruptController, XPS_PMU0_INT_ID, (Xil_ExceptionHandler) pmuIRQ_handler, ( void * ) &xInterruptController);
    if (status != XST_SUCCESS)
    {
        xil_printf("\nERROR CONNECTING INTERRUPT");
        for(;;);
    }

    XScuGic_SetPriorityTriggerType(&xInterruptController, XPS_PMU0_INT_ID, 8, 0b10); // Priority 8 (second highest) and high level sensitivity

    XScuGic_InterruptMaptoCpu(&xInterruptController, 0, XPS_PMU0_INT_ID);

    // Enable the interrupt for the xTimer in the interrupt controller.
    XScuGic_Enable( &xInterruptController, XPS_PMU0_INT_ID );
}

int RegisterInterruptExceptions(XScuGic *XScuGicInstancePtr)
{
    /*
     * Connect the interrupt controller interrupt handler to the hardware
     * interrupt handling logic in the ARM processor.
     */
    Xil_ExceptionRegisterHandler(XIL_EXCEPTION_ID_INT, (Xil_ExceptionHandler) XScuGic_InterruptHandler, XScuGicInstancePtr);

    /*
     * Enable interrupts in the ARM
     */
    Xil_ExceptionEnable();

    return XST_SUCCESS;
}

void pmuIRQ_handler( void *CallbackRef )
{
    xil_printf("Interrupt occurred\n");
}
I am not sure whether I need to use Vivado to map the PMU interrupt to the GIC. I couldn't find any examples of generating interrupts using the performance monitor. I am currently using the default ZC706 HW platform provided by Xilinx SDK, and I am not sure whether I need to generate a bitstream in Vivado that maps the PMU to the GIC; I thought that this was done by using XScuGic_InterruptMaptoCpu().
I tried both XPS_PMU0_INT_ID and XPS_PMU1_INT_ID, but neither worked. I tried to follow this post on using shared peripheral interrupts, since the PMU interrupt is of that type: https://forums.xilinx.com/t5/Processor-System-Design-and-AXI/Using-Private-and-Shared-interrupts-on-Zynq/m-p/773673
Thanks for the help,
Javier
The last parameter is incorrect. It should be 0b01 for high-level sensitivity instead of 0b10, as shown below:
XScuGic_SetPriorityTriggerType(&xInterruptController, XPS_PMU0_INT_ID, 8, 0b01); // Priority 8 (second highest) and high level sensitivity
I'm trying to evaluate a routing technique I implemented with Mininet, Open vSwitch, and the Ryu controller, but currently I'm unable to figure out how to measure the packet processing time within the switch. I can measure the processing time of probe messages, because a packet_in occurs for those and they are reported back to the controller program. But how do I measure the processing time for packets whose presence is not reported back to the controller by the switch (packet_in will not occur)? Probably the ovs-ofctl command has some options that can report the time, but I'm still not sure how to do that. Please help me in this circumstance; I have not found enough resources on the internet. Thanks in advance for your help.
As long as you're using the kernel datapath of Open vSwitch, you should be able to retrieve the processing delay for each packet using the usual Linux tracing toolkits.
Below is an example using the BPF infrastructure (requires Linux v4.4+) and the bcc toolkit (I have version 0.5.0-1). Note, however, that for high packet rates, the overhead from running this tool may be significant. Another way to measure the overhead your modifications add is to measure the maximum throughput the switch can achieve with and without your modifications.
#!/usr/bin/env python
from bcc import BPF
import sys
import ctypes as ct

prog = """
#include <uapi/linux/ptrace.h>
#include <linux/openvswitch.h>

struct vport;

enum action_t {
    DROP = 0,
    OUTPUT,
};

struct proc_record_t {
    u64 delay;
    enum action_t action;
};

BPF_HASH(pkts, struct sk_buff *, u64, 1024);
BPF_PERF_OUTPUT(events);

// Take a timestamp at packet reception by Open vSwitch.
int
kprobe__ovs_vport_receive(struct pt_regs *ctx, struct vport *port, struct sk_buff *skb) {
    u64 ts = bpf_ktime_get_ns();
    pkts.update(&skb, &ts);
    return 0;
}

// Once the packet has been processed by the switch, measure the processing
// delay and send it to userspace using perf_submit.
static inline void
end_processing(struct pt_regs *ctx, struct sk_buff *skb, enum action_t action) {
    u64 *tsp = pkts.lookup(&skb);
    if (tsp) {
        u64 ts = bpf_ktime_get_ns();
        struct proc_record_t record = {};
        record.delay = ts - *tsp;
        record.action = action;
        events.perf_submit(ctx, &record, sizeof(record));
        pkts.delete(&skb);
    }
}

// Called when packets are dropped by Open vSwitch.
int
kprobe__consume_skb(struct pt_regs *ctx, struct sk_buff *skb) {
    end_processing(ctx, skb, DROP);
    return 0;
}

// Called when packets are outputted by Open vSwitch.
int
kprobe__ovs_vport_send(struct pt_regs *ctx, struct vport *vport, struct sk_buff *skb) {
    end_processing(ctx, skb, OUTPUT);
    return 0;
}
"""
b = BPF(text=prog)

class Data(ct.Structure):
    _fields_ = [("delay", ct.c_ulonglong),
                ("action", ct.c_int)]

actions = ["drop", "output"]

print("%-18s %s" % ("DELAY(ns)", "ACTION"))

# Callback function to display information from the kernel
def print_event(cpu, data, size):
    event = ct.cast(data, ct.POINTER(Data)).contents
    print("%-18d %s" % (event.delay, actions[event.action]))

b["events"].open_perf_buffer(print_event)
while True:
    b.kprobe_poll()
You'll need to install bcc to execute this script. Then, it's as simple as:
$ sudo python trace_processing_time.py
DELAY(ns)          ACTION
97385              drop
55630              drop
38768              drop
61113              drop
10382              output
14795              output
See the bcc documentation for details on how this script works. You will need to change it if you want to support more OpenFlow actions (only drop and output are currently supported).
I need to block and wake a process using SIGUSR2 and SIGUSR1, respectively. Below is my signal handler subroutine. How do I wake a process blocked by pause()?
void sig_handler(int sig) {
    static int i = 1;
    if (sig == SIGUSR2) {
        pause();
    }
    else if (sig == SIGUSR1) {
        /* I don't know what to write here */
    }
}
Also, I read somewhere that pause() is not good programming practice. Is there another way to suspend a process for some time?
See this page
In general, doing a lot of work in signal handlers is ... tricky. Some things are not async-signal-safe, which makes robust programming there a bit difficult. In your case, pause() waits for a signal to arrive, but since you are calling it from the signal handler, it is not going to work there (I think).
As for making the process sleep and resume on signals, look at the page I linked above. The best way is to have the signal handlers simply set flags and have the main thread (i.e., in main() or in an event loop) react to those flags. As recommended by the page, use sigsuspend when SIGUSR2 is received to pause the process until SIGUSR1 is received, as sketched below.
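Here is a minimal sketch of that flag-plus-sigsuspend pattern; the flag name, the handler, and the work in the loop are illustrative assumptions, not taken from the linked page:

/* Minimal sketch of the flag + sigsuspend pattern described above.
 * Names (paused, handler, the "work") are illustrative only. */
#include <signal.h>
#include <stdio.h>
#include <unistd.h>

static volatile sig_atomic_t paused = 0;

static void handler(int sig)
{
    if (sig == SIGUSR2)
        paused = 1;             /* request: go to sleep */
    else if (sig == SIGUSR1)
        paused = 0;             /* request: wake up */
}

int main(void)
{
    struct sigaction sa;
    sigset_t block, oldmask;

    sa.sa_handler = handler;
    sigemptyset(&sa.sa_mask);
    sa.sa_flags = 0;
    sigaction(SIGUSR1, &sa, NULL);
    sigaction(SIGUSR2, &sa, NULL);

    sigemptyset(&block);
    sigaddset(&block, SIGUSR1);
    sigaddset(&block, SIGUSR2);

    for (;;) {
        /* Block both signals while testing the flag, so a signal cannot
         * slip in between the test and the call to sigsuspend. */
        sigprocmask(SIG_BLOCK, &block, &oldmask);
        while (paused)
            sigsuspend(&oldmask);   /* atomically unblock and wait for a signal */
        sigprocmask(SIG_SETMASK, &oldmask, NULL);

        /* Normal work goes here; the handlers do nothing but set the flag. */
        printf("working...\n");
        sleep(1);
    }
}

Sending SIGUSR2 makes the loop park in sigsuspend at the next iteration, and SIGUSR1 lets it continue; no blocking call is ever made from inside a handler.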
It's simple. Use the 'kill' system call:
void sig_handler(int sig) {
    static int i = 1;
    if (sig == SIGUSR2) {
        pause();
    }
    else if (sig == SIGUSR1) {
        kill(<pid of process to wake up>, sig);
        // make sure that process with pid has registered for sig
    }
}