Is psutil.STATUS_DEAD the same status as psutil.STATUS_ZOMBIE? - python-3.8

In the Python psutil module, I see two statuses for a process: psutil.STATUS_DEAD and psutil.STATUS_ZOMBIE. I need to understand the difference between them. I am able to simulate a zombie process using the 'kill -1' and 'kill -3' commands, but I am not able to simulate a dead process.
Any thoughts here?

They are different, as defined in 'psutil/_common.py':
STATUS_ZOMBIE = "zombie"
STATUS_DEAD = "dead"
But if you search for STATUS_DEAD in version 5.9.0, you will find that on some platforms they are treated as the same state.
'psutil/_psbsd.py' shows that STATUS_DEAD is not used on any BSD platform:
# OPENBSD
# According to /usr/include/sys/proc.h SZOMB is unused.
# test_zombie_process() shows that SDEAD is the right
# equivalent. Also it appears there's no equivalent of
# psutil.STATUS_DEAD. SDEAD really means STATUS_ZOMBIE.
# cext.SZOMB: _common.STATUS_ZOMBIE,
cext.SDEAD: _common.STATUS_ZOMBIE,
cext.SZOMB: _common.STATUS_ZOMBIE,
For Linux, 'psutil/_pslinux.py' shows:
# See:
# https://github.com/torvalds/linux/blame/master/fs/proc/array.c
# ...and (TASK_* constants):
# https://github.com/torvalds/linux/blob/master/include/linux/sched.h
PROC_STATUSES = {
    "R": _common.STATUS_RUNNING,
    "S": _common.STATUS_SLEEPING,
    "D": _common.STATUS_DISK_SLEEP,
    "T": _common.STATUS_STOPPED,
    "t": _common.STATUS_TRACING_STOP,
    "Z": _common.STATUS_ZOMBIE,
    "X": _common.STATUS_DEAD,
    "x": _common.STATUS_DEAD,
    "K": _common.STATUS_WAKE_KILL,
    "W": _common.STATUS_WAKING,
    "I": _common.STATUS_IDLE,
    "P": _common.STATUS_PARKED,
}
These statuses correspond to Linux's task states:
// https://github.com/torvalds/linux/blame/master/fs/proc/array.c
static const char * const task_state_array[] = {
    /* states in TASK_REPORT: */
    "R (running)",        /* 0x00 */
    "S (sleeping)",       /* 0x01 */
    "D (disk sleep)",     /* 0x02 */
    "T (stopped)",        /* 0x04 */
    "t (tracing stop)",   /* 0x08 */
    "X (dead)",           /* 0x10 */
    "Z (zombie)",         /* 0x20 */
    "P (parked)",         /* 0x40 */
    /* states beyond TASK_REPORT: */
    "I (idle)",           /* 0x80 */
};
Therefore, on Linux the difference between them is clear: 'Z' (zombie) means the process has terminated but its parent has not yet reaped it, while 'X' (dead) is the transient final state of teardown, which is why you can observe zombies but will practically never catch a dead process. See also: What Is a "Zombie Process" on Linux?
In psutil, if you create a process p using psutil.Popen() and then kill it, it will keep the 'zombie' status for as long as the parent process is alive and you haven't called p.wait().
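The same state is easy to reproduce below the Python layer. A minimal C sketch (Linux-only, plain POSIX calls; illustrative rather than definitive): the child exits but is never reaped, so field 3 of /proc/<pid>/stat (the one-letter state that psutil maps onto its STATUS_* constants) reads 'Z' until waitpid() runs.
#include <stdio.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>

int main(void)
{
    pid_t pid = fork();
    if (pid == 0) {
        _exit(0);                   /* child terminates immediately */
    }
    sleep(1);                       /* parent does NOT wait() yet */

    /* Read the child's entry in /proc: its state letter is now 'Z'. */
    char path[64], buf[256];
    snprintf(path, sizeof path, "/proc/%d/stat", (int)pid);
    FILE *fp = fopen(path, "r");
    if (fp != NULL && fgets(buf, sizeof buf, fp) != NULL) {
        printf("%s\n", buf);
    }
    if (fp != NULL) {
        fclose(fp);
    }

    waitpid(pid, NULL, 0);          /* reaping removes the zombie */
    return 0;
}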


How to set up a physical memory protection in riscv?

I am trying to write a small piece of software that uses the RISC-V PMP. I'm using SaxonSoc (https://github.com/SpinalHDL/SaxonSoc), which means I have access to the hardware description and to the simulation waves.
I am trying to understand why this small test is not working:
int main()
{
    volatile uint32_t * volatile mem = (uint32_t * volatile)(SYSTEM_RAM_A_CTRL + 0x2000);
    *mem = 0x15;
    main_println32x("mem :", *mem);

    u32 new_pmpcfg0 = 1 << 7;            // setting the L bit so restrictions also apply to M mode
    new_pmpcfg0 = new_pmpcfg0 | 3 << 3;  // A=3=NAPOT ; R=W=X=0
    u32 new_pmpaddr0 = (u32)(SYSTEM_RAM_A_CTRL + 0x2000);

    __asm__ volatile ("csrw pmpaddr0, %0"
                      : /* output: none */
                      : "r" (new_pmpaddr0) /* input : from register */
                      : /* clobbers: none */);
    __asm__ volatile ("csrw pmpcfg0, %0"
                      : /* output: none */
                      : "r" (new_pmpcfg0) /* input : from register */
                      : /* clobbers: none */);

    *mem = 0x19; // I expect an exception here
    main_println32x("mem :", *mem);
}
Simulation shows that the CSR registers are configured correctly. But unfortunately, I don't get the exception I'm waiting for.
Any idea what I am missing here?
My first thought would be that pmpaddr0 is not set correctly. As you are using NAPOT encoding, the start address and the region size are encoded together in pmpaddr0, as per Table 3.10 of the RISC-V privileged spec.
I would suggest testing this by setting pmpaddr0 to all ones, so that the region covers the whole address space and the address you are accessing is matched against that PMP region. It should then throw an exception, since the access violates the configured permissions.
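For reference, a rough sketch of the NAPOT arithmetic (the helper name is mine, not from SaxonSoc or the spec): pmpaddr holds physical address bits [33:2], and a naturally aligned power-of-two region of 2^n bytes is encoded by setting the low (n-3) bits of that value to 1. Writing the raw byte address, as in the question, therefore selects a different region than intended.
#include <stdint.h>

/* Illustrative helper, assuming `base` is aligned to `size` and
 * `size` is a power of two >= 8 bytes: shift the byte address right
 * by 2, then set the trailing bits that encode the region size. */
static inline uint32_t pmp_napot_addr(uint32_t base, uint32_t size)
{
    return (base >> 2) | ((size >> 3) - 1);
}

/* Example: pmp_napot_addr(base, 0x2000) encodes an 8 KiB (2^13 byte)
 * region as (base >> 2) with ten trailing 1 bits. */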

Can we have dirty data in the L1 cache on a GPU?

I've read about some of the common write policies in GPU microarchitectures. For most GPUs the write policy is as in the picture below (taken from the gpgpu-sim manual). Based on that picture I have a question: can we have dirty data in the L1 cache?
The L1 on some GPU architectures is a write-back cache for global accesses. Note that this varies by GPU architecture, e.g. in whether global activity is cached in L1 at all.
Speaking generally, then, yes you can have dirty data. By this I mean that the data in the L1 cache is modified (compared to what is otherwise in global space or the L2 cache) and it has not yet been "flushed" or updated into the L2 cache. (You can also have "stale" data - data in the L1 that has not been modified, but is not consistent with the L2.)
We can create a simple proof point for this (dirty data).
The following code, when executed on a cc7.0 device (and probably some other architectures as well), will not give the expected answer of 1024.
This is due to the fact that the L1, which is a separate entity per SM, is not immediately flushed to the L2. It therefore has "dirty data" by the above definition.
(The code is broken for this reason. Don't use this code. It's just a proof point.)
#include <iostream>
#include <cuda_runtime.h>

constexpr int num_blocks = 1024;
constexpr int num_threads = 32;

struct Lock {
    int *locked;

    Lock() {
        int init = 0;
        cudaMalloc(&locked, sizeof(int));
        cudaMemcpy(locked, &init, sizeof(int), cudaMemcpyHostToDevice);
    }

    ~Lock() {
        if (locked) cudaFree(locked);
        locked = NULL;
    }

    __device__ __forceinline__ void acquire_lock() {
        while (atomicCAS(locked, 0, 1) != 0);
    }

    __device__ __forceinline__ void unlock() {
        atomicExch(locked, 0);
    }
};

__global__ void counter(Lock lock, int *total) {
    if (threadIdx.x == 1) {
        lock.acquire_lock();
        *total = *total + 1;
        // __threadfence(); uncomment this line to fix
        lock.unlock();
    }
}

int main() {
    int *total_dev;
    cudaMalloc(&total_dev, sizeof(int));
    int total_host = 0;
    cudaMemcpy(total_dev, &total_host, sizeof(int), cudaMemcpyHostToDevice);
    {
        Lock lock;
        counter<<<num_blocks, num_threads>>>(lock, total_dev);
        cudaDeviceSynchronize();
        cudaMemcpy(&total_host, total_dev, sizeof(int), cudaMemcpyDeviceToHost);
        std::cout << total_host << std::endl;
    }
    cudaFree(total_dev);
}
In case there is any further doubt about whether this is a proper proof (e.g. to dispel arguments about things being "optimized into a register", etc.), we can study the resulting SASS code. The end of the above kernel looks like this:
/*0130*/ LDG.E.SYS R0, [R4] ;                        // load *total
/*0140*/ IADD3 R7, R0, 0x1, RZ ;                     // add 1
/*0150*/ STG.E.SYS [R4], R7 ;                        // store *total
/*0160*/ ATOMG.E.EXCH.STRONG.GPU PT, RZ, [R2], RZ ;  // lock.unlock
/*0170*/ EXIT ;
Since the result register has definitely been stored to the global space, we can infer that if another thread (in another SM) reads an unexpected value in global space for *total it must be due to the fact that the store from another SM has not reached the L2, i.e. has not reached device-wide consistency/coherency. Therefore the data in some other SM is "dirty". We can (presumably) rule out the "stale" case here (the data in the other L1 was written, but I have "old" data in my L1) because the global load indicated above does not happen until the lock is acquired in the SM.
Note that the above code "fails" on cc7.0 devices (and probably some other device architectures). It does not necessarily fail on the GPU you are using. But it is still "broken".

LPC824 microcontroller ADC demo HardFault problem

I'm trying to program an LPC824 microcontroller board (https://www.switch-science.com/catalog/2265/) with LPCOpen.
I'm using it with LPCLink 2 debugger board.
My goal is to get some information from the "pressure sensor" with an ADC.
My code stops with a HardFault when executing the NVIC_EnableIRQ() call near the end of main().
If I don't use the NVIC interrupt controller, my code works and I can get values from the sensor with the ADC.
What am I doing wrong?
Here is my adc.c code:
#include "board.h"
static volatile int ticks;
static bool sequenceComplete = false;
static bool thresholdCrossed = false;
#define TICKRATE_HZ (100) /* 100 ticks per second */
#define BOARD_ADC_CH 2
/**
* #brief Handle interrupt from ADC sequencer A
* #return Nothing
*/
void ADC_SEQA_IRQHandler(void) {
uint32_t pending;
/* Get pending interrupts */
pending = Chip_ADC_GetFlags(LPC_ADC);
/* Sequence A completion interrupt */
if (pending & ADC_FLAGS_SEQA_INT_MASK) {
sequenceComplete = true;
}
/* Threshold crossing interrupt on ADC input channel */
if (pending & ADC_FLAGS_THCMP_MASK(BOARD_ADC_CH)) {
thresholdCrossed = true;
}
/* Clear any pending interrupts */
Chip_ADC_ClearFlags(LPC_ADC, pending);
}
/**
* #brief Handle interrupt from SysTick timer
* #return Nothing
*/
void SysTick_Handler(void) {
static uint32_t count;
/* Every 1/2 second */
if (count++ == TICKRATE_HZ / 2) {
count = 0;
Chip_ADC_StartSequencer(LPC_ADC, ADC_SEQA_IDX);
}
}
/**
* #brief main routine for ADC example
* #return Function should not exit
*/
int main(void) {
uint32_t rawSample;
int j;
SystemCoreClockUpdate();
Board_Init();
/* Setup ADC for 12-bit mode and normal power */
Chip_ADC_Init(LPC_ADC, 0);
Chip_ADC_Init(LPC_ADC, ADC_CR_MODE10BIT);
/* Need to do a calibration after initialization and trim */
Chip_ADC_StartCalibration(LPC_ADC);
while (!(Chip_ADC_IsCalibrationDone(LPC_ADC))) {
}
/* Setup for maximum ADC clock rate using sycnchronous clocking */
Chip_ADC_SetClockRate(LPC_ADC, ADC_MAX_SAMPLE_RATE);
Chip_ADC_SetupSequencer(LPC_ADC, ADC_SEQA_IDX,
(ADC_SEQ_CTRL_CHANSEL(BOARD_ADC_CH) | ADC_SEQ_CTRL_MODE_EOS));
Chip_Clock_EnablePeriphClock(SYSCTL_CLOCK_SWM);
Chip_SWM_EnableFixedPin(SWM_FIXED_ADC2);
Chip_Clock_DisablePeriphClock(SYSCTL_CLOCK_SWM);
/* Setup threshold 0 low and high values to about 25% and 75% of max */
Chip_ADC_SetThrLowValue(LPC_ADC, 0, ((1 * 0xFFF) / 4));
Chip_ADC_SetThrHighValue(LPC_ADC, 0, ((3 * 0xFFF) / 4));
Chip_ADC_ClearFlags(LPC_ADC, Chip_ADC_GetFlags(LPC_ADC));
Chip_ADC_EnableInt(LPC_ADC,
(ADC_INTEN_SEQA_ENABLE | ADC_INTEN_OVRRUN_ENABLE));
Chip_ADC_SelectTH0Channels(LPC_ADC, ADC_THRSEL_CHAN_SEL_THR1(BOARD_ADC_CH));
Chip_ADC_SetThresholdInt(LPC_ADC, BOARD_ADC_CH, ADC_INTEN_THCMP_CROSSING);
/* Enable ADC NVIC interrupt */
NVIC_EnableIRQ(ADC_SEQA_IRQn);
Chip_ADC_EnableSequencer(LPC_ADC, ADC_SEQA_IDX);
SysTick_Config(SystemCoreClock / TICKRATE_HZ);
/* Endless loop */
while (1) {
/* Sleep until something happens */
__WFI();
if (thresholdCrossed) {
thresholdCrossed = false;
printf("********ADC threshold event********\r\n");
}
/* Is a conversion sequence complete? */
if (sequenceComplete) {
sequenceComplete = false;
/* Get raw sample data for channels 0-11 */
for (j = 0; j < 12; j++) {
rawSample = Chip_ADC_GetDataReg(LPC_ADC, j);
/* Show some ADC data */
if (rawSample & (ADC_DR_OVERRUN | ADC_SEQ_GDAT_DATAVALID)) {
printf("Chan: %d Val: %d\r\n", j, ADC_DR_RESULT(rawSample));
printf("Threshold range: 0x%x ",
ADC_DR_THCMPRANGE(rawSample));
printf("Threshold cross: 0x%x\r\n",
ADC_DR_THCMPCROSS(rawSample));
printf("Overrun: %s ",
(rawSample & ADC_DR_OVERRUN) ? "true" : "false");
printf("Data Valid: %s\r\n\r\n",
(rawSample & ADC_SEQ_GDAT_DATAVALID) ?
"true" : "false");
}
}
}
}
}
A hard fault usually means that you tried to execute code outside allowed addresses. If you have enabled an interrupt but not registered its handler in the vector table, the MCU will jump to whatever address happens to be written there, after which the program crashes.
How to fix that depends on the toolchain. Assuming LPCXpresso, you have several options for setting up libraries (I don't know about LPCOpen specifically), so where to find the vector table differs from case to case. However, this works quite similarly on most MCUs, ARM or not. Somewhere in a "CRT start-up" file you should have something along the lines of this:
void (* const g_pfnVectors[])(void) = ...
This is an array of function pointers which becomes the vector table, allocated in memory at address 0 on Cortex-M. You have to place your function at the relevant interrupt vector. For example, it may say something like:
PIN_INT0_IRQHandler, // PIO INT0
If that's the interrupt you should implement, then you replace that line:
#include "my_irq_stuff.h"
...
void (* const g_pfnVectors[])(void) =
...
my_INT0, // PIO INT0
Assuming my_irq_stuff.h contains the function prototype my_INT0 for the interrupt service routine. The actual routine should be implemented in the corresponding .c file.
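One more thing worth checking: many Cortex-M start-up files declare every handler as a weak alias of a default handler, so a misspelled handler name in user code compiles cleanly but never gets called. A hedged sketch of that pattern (generic names, not necessarily LPCOpen's exact ones):
/* Default handler: any unimplemented interrupt ends up spinning here. */
void IntDefaultHandler(void)
{
    while (1) {
    }
}

/* Weak alias: overridden only by a strong definition with the SAME name. */
void ADC_SEQA_IRQHandler(void) __attribute__((weak, alias("IntDefaultHandler")));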

Building multiple VxWorks VMs in VMware

When I build one VxWorks VM in VMware, it works. But when I create two more VxWorks VMs separately, with different IPs, the second VxWorks fails with the following (the log is from vmware.log):
2015-09-02T09:10:45.057+08:00| vcpu-0| W110: VLANCE: RDP OUT to unknown Register 100
2015-09-02T09:10:45.057+08:00| vcpu-0| I120: VNET: MACVNetPort_SetPADR: Ethernet0: can't set PADR (0)
2015-09-02T09:10:45.057+08:00| vcpu-0| I120: Msg_Post: Warning
2015-09-02T09:10:45.057+08:00| vcpu-0| I120: [msg.vnet.padrConflict] MAC address 00:0C:29:5A:23:AF of adapter Ethernet0 is within the reserved address range or is in use by another virtual adapter on your system. Adapter Ethernet0 may not have network connectivity.
I am sure each VxWorks OS got its own MAC address. Another point is that I created the second VxWorks by copying the files from the first one.
Remove the macro VXWORKS_RUN_ON_VMWARE and any related code in sysLn97xEnd.c.
Everything then works perfectly under VMware Workstation 11, and the MAC address can be set on the VM's configuration page. That macro was probably meant for a much older version of VMware Workstation.
Setting the MAC address in VMware does not work here; you need a function that generates a different MAC address while the system boots.
Each copy of the VM will need its own bootrom and vxworks image built (simply use -D MACRO in the (.wpj) MAKEFILE to switch MACs between different projects with a single header).
Here is a rough solution for setting multiple MACs in one VM image:
0.
Define the MAC addresses and a function to look them up in ln97xEnd.c:
#define LN97_MAX_IP (4)

int ln97EndLoaded = 0;
char ln97DefineAddr[LN97_MAX_IP][6] = {
    {0x00, 0x0c, 0x29, 0x5a, 0x23, 0xa0},
    {0x00, 0x0c, 0x29, 0x5a, 0x23, 0xa1},
    {0x00, 0x0c, 0x29, 0x5a, 0x23, 0xa2},
    {0x00, 0x0c, 0x29, 0x5a, 0x23, 0xa3}
};
END_OBJ * ln97xEndList[LN97_MAX_IP] = {NULL, NULL, NULL, NULL};

/* return the MAC slot reserved for this driver instance, or NULL */
char * ln97xFindDefinedAddr(LN_97X_DRV_CTRL * pDrvCtrl)
{
    int i;

    for (i = 0; i < ln97EndLoaded && i < LN97_MAX_IP; i++)
    {
        if (ln97xEndList[i] == &pDrvCtrl->endObj)
        {
            return ln97DefineAddr[i];
        }
    }
    return NULL;
}
1.
Modify ln97xEndLoad() in ln97xEnd.c to init a different MAC (and store the END_OBJ * if needed):
END_OBJ * ln97xEndLoad
    ...
    DRV_LOG (DRV_DEBUG_LOAD, "Done loading ln97x...\n", 1, 2, 3, 4, 5, 6);

    /** add to save END_OBJ* */
    if (ln97EndLoaded < LN97_MAX_IP)
    {
        ln97xEndList[ln97EndLoaded] = &pDrvCtrl->endObj;
        ln97EndLoaded++;
    }
    /** end add */

    return (&pDrvCtrl->endObj);
    ...
2.
Change sysLan97xEnetAddrGet() in sysLn97xEnd.c: aprom should now be filled in by ln97xFindDefinedAddr() instead of the hard-coded "00:0C:29:5A:23:AF".
char * ln97xFindDefinedAddr(LN_97X_DRV_CTRL * pDrvCtrl);
...
STATUS sysLan97xEnetAddrGet
...
{
    char * addrDef = NULL;
    ...
    /* modify by frankzhou to support in VMware */
    #define VXWORKS_RUN_ON_VMWARE
    #ifndef VXWORKS_RUN_ON_VMWARE
    /* check for ASCII 'W's at APROM bytes 14 and 15 */
    if ((aprom [0xe] != 'W') || (aprom [0xf] != 'W'))
    {
        logMsg ("sysLn97xEnetAddrGet: W's not stored in aprom\n",
                0, 1, 2, 3, 4, 5);
        return ERROR;
    }
    #endif
    #ifdef VXWORKS_RUN_ON_VMWARE
    /** add by bonex for multi mac addr */
    addrDef = ln97xFindDefinedAddr(pDrvCtrl);
    if (addrDef == NULL)
    {
        aprom[0] = 0x00;
        aprom[1] = 0x0c;
        aprom[2] = 0x29;
        aprom[3] = 0x5a;
        aprom[4] = 0x23;
        aprom[5] = 0xaf;
    }
    else
    {
        bcopy (addrDef, aprom, 6);
    }
    /** end by bonex */
    #endif
    /* end by frankzhou */
    ...
3.
Rebuild the bootrom, and rebuild the vxworks image too.
Result: telnet into the VMs and check with arpShow: https://i.stack.imgur.com/kR9Uy.jpg
This is caused by the MAC address setting in sysLn97xEnd.c. It must be modified, and the bootrom and vxworks image rebuilt, for each additional VxWorks node, or the conflict will persist.

Time CPU Used by Process

I've managed to implement the code in this listing to get a list of all running processes and their IDs. What I need now is to extract how much CPU time each process uses.
I've tried referring to the keys in the code, but when I try to print 'Ticks of CPU Time' I get a zero value for all of the processes. Plus, even if I did get a value, I'm not sure whether 'Ticks of CPU Time' is exactly what I'm looking for.
struct vmspace *p_vmspace; /* Address space. */
struct sigacts *p_sigacts; /* Signal actions, state (PROC ONLY). */
int p_flag; /* P_* flags. */
char p_stat; /* S* process status. */
pid_t p_pid; /* Process identifier. */
pid_t p_oppid; /* Save parent pid during ptrace. XXX */
int p_dupfd; /* Sideways return value from fdopen. XXX */
/* Mach related */
caddr_t user_stack; /* where user stack was allocated */
void *exit_thread; /* XXX Which thread is exiting? */
int p_debugger; /* allow to debug */
boolean_t sigwait; /* indication to suspend */
/* scheduling */
u_int p_estcpu; /* Time averaged value of p_cpticks. */
int p_cpticks; /* Ticks of cpu time. */
fixpt_t p_pctcpu; /* %cpu for this process during p_swtime */
void *p_wchan; /* Sleep address. */
char *p_wmesg; /* Reason for sleep. */
u_int p_swtime; /* Time swapped in or out. */
u_int p_slptime; /* Time since last blocked. */
struct itimerval p_realtimer; /* Alarm timer. */
struct timeval p_rtime; /* Real time. */
u_quad_t p_uticks; /* Statclock hits in user mode. */
u_quad_t p_sticks; /* Statclock hits in system mode. */
u_quad_t p_iticks; /* Statclock hits processing intr. */
int p_traceflag; /* Kernel trace points. */
struct vnode *p_tracep; /* Trace to vnode. */
int p_siglist; /* DEPRECATED */
struct vnode *p_textvp; /* Vnode of executable. */
int p_holdcnt; /* If non-zero, don't swap. */
sigset_t p_sigmask; /* DEPRECATED. */
sigset_t p_sigignore; /* Signals being ignored. */
sigset_t p_sigcatch; /* Signals being caught by user. */
u_char p_priority; /* Process priority. */
u_char p_usrpri; /* User-priority based on p_cpu and p_nice. */
char p_nice; /* Process "nice" value. */
char p_comm[MAXCOMLEN+1];
struct pgrp *p_pgrp; /* Pointer to process group. */
struct user *p_addr; /* Kernel virtual addr of u-area (PROC ONLY). */
u_short p_xstat; /* Exit status for wait; also stop signal. */
u_short p_acflag; /* Accounting flags. */
struct rusage *p_ru; /* Exit information. XXX */
In fact, I've also tried to print 'Time averaged value of p_cpticks' and a few others, and never got interesting values. Here is my code, which prints the information retrieved (I got it from cocoabuilder.com):
- (NSDictionary *) getProcessList {
    NSMutableDictionary *ProcList = [[NSMutableDictionary alloc] init];
    kinfo_proc *mylist;
    size_t mycount = 0;
    mylist = (kinfo_proc *)malloc(sizeof(kinfo_proc));
    GetBSDProcessList(&mylist, &mycount);

    printf("There are %d processes.\n", (int)mycount);
    NSLog(@" = = = = = = = = = = = = = = =");
    int k;
    for (k = 0; k < mycount; k++) {
        kinfo_proc *proc = NULL;
        proc = &mylist[k];
        // NSString *processName = [NSString stringWithFormat: @"%s", proc->kp_proc.p_comm];
        // [ProcList setObject: processName forKey: processName];
        // [ProcList setObject: proc->kp_proc.p_pid forKey: processName];
        // printf("ID: %d - NAME: %s\n", proc->kp_proc.p_pid, proc->kp_proc.p_comm);
        printf("ID: %d - NAME: %s CPU TIME: %d \n", proc->kp_proc.p_pid, proc->kp_proc.p_comm, proc->kp_proc.p_pid);
        // Right click on p_comm and select 'jump to definition' to find other values.
    }
    free(mylist);
    return [ProcList autorelease];
}
Thanks!
EDIT: I've just offered a bounty for this question. What I'm looking for specifically is the amount of time each process spends on the CPU.
If, in addition to this, you can give the %CPU being used by a process, that would be fantastic.
The code should be efficient, since it will be called every second, and the method will be called on all running processes. Objective-C preferred.
Thanks again!
EDIT 2
Also, any comments as to why people are ignoring this question would also be helpful :)
Have a look at the Darwin source for libtop.c and particularly the libtop_pinfo_update_cpu_usage() function. Note that:
You'll need a basic understanding of Mach programming fundamentals to make sense of this code, as it uses task ports, etc.
If you want to simply use libtop, you'll have to download the source and compile it yourself.
Your process will need privileges to get at the task ports for other processes.
If all this sounds rather daunting, well… there is a way that uses less esoteric APIs: just spawn a top process and parse its standard output. A quick glance at the top(1) man page turned up this little gem:
$ top -s 1 -l 3600 -stats pid,cpu,time
That is, sample once per second for 3600 seconds (one hour), and output to stdout in log form only the statistics for pid, cpu usage, and time.
Spawning and managing the child top process and then parsing its output are all straightforward Unix programming exercises.
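For illustration, a minimal C sketch of that approach (the parsing is deliberately naive: it assumes the three requested columns and skips any line that does not start with a numeric PID):
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    /* Two samples, one second apart, logging pid, %CPU and time. */
    FILE *fp = popen("top -s 1 -l 2 -stats pid,cpu,time", "r");
    if (fp == NULL) {
        perror("popen");
        return EXIT_FAILURE;
    }

    char line[256];
    while (fgets(line, sizeof line, fp) != NULL) {
        int pid;
        double cpu;
        char cputime[32];
        /* Header and summary lines fail the scan and are skipped. */
        if (sscanf(line, "%d %lf %31s", &pid, &cpu, cputime) == 3) {
            printf("pid=%d cpu=%.1f%% time=%s\n", pid, cpu, cputime);
        }
    }
    pclose(fp);
    return 0;
}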
Have you taken a look at struct rusage? You have it listed and commented as "Exit information", but I know that it contains the resources actually used by a process. Take a look at this page. I remember I used getrusage() to calculate the exact amount of CPU time used by the current process in my scientific calculations, so you just have to work out how to query that struct for each process in your list, I guess.
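A minimal sketch of getrusage() for the calling process (note the limitation: it reports RUSAGE_SELF or RUSAGE_CHILDREN only, so by itself it cannot query arbitrary PIDs from your process list):
#include <stdio.h>
#include <sys/resource.h>

int main(void)
{
    struct rusage ru;

    /* ru_utime / ru_stime hold the user and system CPU time consumed. */
    if (getrusage(RUSAGE_SELF, &ru) == 0) {
        printf("user: %ld.%06ds  system: %ld.%06ds\n",
               (long)ru.ru_utime.tv_sec, (int)ru.ru_utime.tv_usec,
               (long)ru.ru_stime.tv_sec, (int)ru.ru_stime.tv_usec);
    }
    return 0;
}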