Managed Server State Integer Value Reference - weblogic

I'm using a JMX agent to inspect the state of managed servers in a WebLogic domain. The only value I can extract with this agent is the StateVal, which is the integer value of the managed server's state. I can't seem to find a complete reference for the integer values of server states. Can anyone point me to a reference, or explain why this isn't a simple request? For instance, here is what I am looking for...
State       Value
---------   -----
SHUTDOWN    ?
STARTING    ?
RUNNING     2
FAILED      ?
...
I'm working with version 12.1.

Here are the values you are looking for:
SHUTDOWN = 0;
STARTING = 1;
RUNNING = 2;
STANDBY = 3;
SUSPENDING = 4;
FORCE_SUSPENDING = 5;
RESUMING = 6;
SHUTTING_DOWN = 7;
FAILED = 8;
UNKNOWN = 9;
SHUTDOWN_PENDING = 10;
SHUTDOWN_IN_PROCESS = 11;
FAILED_RESTARTING = 12;
ACTIVATE_LATER = 13;
FAILED_NOT_RESTARTABLE = 14;
FAILED_MIGRATABLE = 15;
DISCOVERED = 16;
ADMIN = 17;
FORCE_SHUTTING_DOWN = 18;
I didn't find it in the WebLogic documentation, but I finally found the answer on the Oracle Community forums.
It's for WebLogic 11g, but I guess the same values are used for WebLogic 12c.
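If you need to turn a StateVal into a readable name in code, a simple lookup table is enough. Here is a sketch in C (the array is just the list above, indexed by value; any language works the same way):

/* Map a WebLogic StateVal integer (0-18) to its state name. */
static const char *WL_SERVER_STATES[] = {
    "SHUTDOWN", "STARTING", "RUNNING", "STANDBY", "SUSPENDING",
    "FORCE_SUSPENDING", "RESUMING", "SHUTTING_DOWN", "FAILED",
    "UNKNOWN", "SHUTDOWN_PENDING", "SHUTDOWN_IN_PROCESS",
    "FAILED_RESTARTING", "ACTIVATE_LATER", "FAILED_NOT_RESTARTABLE",
    "FAILED_MIGRATABLE", "DISCOVERED", "ADMIN", "FORCE_SHUTTING_DOWN"
};

const char *stateName(int stateVal) {
    if (stateVal < 0 || stateVal > 18)
        return "INVALID";
    return WL_SERVER_STATES[stateVal];
}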

Related

'proc' undefined when trying to add a system call to xv6

I'm trying to add a "clone" system call to the xv6 os. The call creates a new kernel thread which shares the calling process’s address space. The following is my code in proc.c
int clone(void(*fcn)(void*), void* arg, void* stack)
{
  int i, pid;
  struct proc *np;
  int *myarg;
  int *myret;

  if((np = allocproc()) == 0)
    return -1;
  np->pgdir = proc->pgdir; // Here's where it tells me proc is undefined
  np->sz = proc->sz;
  np->parent = proc;
  *np->tf = *proc->tf;
  np->stack = stack;
  np->tf->eax = 0;
  np->tf->eip = (int)fcn;
  myret = stack + 4096 - 2 * sizeof(int *);
  *myret = 0xFFFFFFFF;
  myarg = stack + 4096 - sizeof(int *);
  *myarg = (int)arg;
  np->tf->esp = (int)stack + PGSIZE - 2 * sizeof(int *);
  np->tf->ebp = np->tf->esp;
  np->isthread = 1;
  for(i = 0; i < NOFILE; i++)
    if(proc->ofile[i])
      np->ofile[i] = filedup(proc->ofile[i]);
  np->cwd = idup(proc->cwd);
  safestrcpy(np->name, proc->name, sizeof(proc->name));
  pid = np->pid;
  acquire(&ptable.lock);
  np->state = RUNNABLE;
  release(&ptable.lock);
  return pid;
}
Most implementations of clone that I've found look nearly identical to this, and all of them use proc. However, whenever I run make, it tells me that 'proc' is undefined. I'd be happy to share my sysproc.c code as well if that would help in any way.
Thank you!
This has nothing to do with your system call's implementation: the proc global variable is set by the scheduler right before it resumes a selected "runnable" process.
The reason for the undefined/null value is probably that this function is being called from the wrong context.
A system call implementation is expected to be executed from a wrapper function named sys_mysyscall, which the syscall dispatcher calls in response to the system call interrupt initiated by user application code.
Please share your entire implementation flow with us for additional assistance.
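For illustration, here is a sketch of such a wrapper in sysproc.c, assuming the stock xv6-x86 argument helper argint() and that a SYS_clone entry has been added to syscall.h and the dispatch table in syscall.c:

// Hypothetical sys_clone wrapper: fetch the three user arguments
// from the trap frame with argint(), then call the real clone().
int
sys_clone(void)
{
  int fcn, arg, stack;

  if(argint(0, &fcn) < 0 || argint(1, &arg) < 0 || argint(2, &stack) < 0)
    return -1;

  // proc is valid here: we arrived via the system-call trap from a
  // running user process, so the scheduler has already set it.
  return clone((void(*)(void*))fcn, (void*)arg, (void*)stack);
}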

Generating mipmaps using vkCmdBlitImage for cubemap textures

What should the parameters of VkImageBlit.dstOffsets and VkImageBlit.srcOffsets be when doing dynamic generation of mipmaps?
I am going layer by layer and, within each layer, mip level by mip level, but somewhere it is going wrong; I suspect mostly the offsets. The data I have contains all six faces at mip level 0.
for (int j = 0; j < bufferCopyRegions.size(); j++) {
    for (int32_t i = 1; i < mipLevels; i++)
    {
        VkImageBlit imageBlit{};

        // Source
        imageBlit.srcSubresource.aspectMask = VK_IMAGE_ASPECT_COLOR_BIT;
        imageBlit.srcSubresource.layerCount = 1;
        imageBlit.srcSubresource.mipLevel = 0;
        imageBlit.srcOffsets[1].x = bitmapInfos[j].width;
        imageBlit.srcOffsets[1].y = bitmapInfos[j].height;
        imageBlit.srcOffsets[1].z = 1;

        // Destination
        imageBlit.dstSubresource.aspectMask = VK_IMAGE_ASPECT_COLOR_BIT;
        imageBlit.dstSubresource.layerCount = 1;
        imageBlit.dstSubresource.mipLevel = i;
        imageBlit.dstOffsets[1].x = int32_t((bitmapInfos[j].width >> i) == 0 ? 1 : (bitmapInfos[j].width >> i));
        imageBlit.dstOffsets[1].y = int32_t((bitmapInfos[j].height >> i) == 0 ? 1 : (bitmapInfos[j].height >> i));
        imageBlit.dstOffsets[1].z = 1;

        VkImageMemoryBarrier imageMemoryBarrier = {};
        imageMemoryBarrier.sType = VK_STRUCTURE_TYPE_IMAGE_MEMORY_BARRIER;
        imageMemoryBarrier.pNext = NULL;
        imageMemoryBarrier.subresourceRange.aspectMask = VK_IMAGE_ASPECT_COLOR_BIT;
        imageMemoryBarrier.subresourceRange.baseMipLevel = i;
        imageMemoryBarrier.subresourceRange.levelCount = 1;
        imageMemoryBarrier.subresourceRange.baseArrayLayer = j;
        imageMemoryBarrier.subresourceRange.layerCount = 1;

        // change layout of current mip level to transfer dest
        setImageLayout(imageMemoryBarrier,
                       blitCmd,
                       image,
                       VK_IMAGE_ASPECT_COLOR_BIT,
                       VK_IMAGE_LAYOUT_UNDEFINED,
                       VK_IMAGE_LAYOUT_TRANSFER_DST_OPTIMAL,
                       imageMemoryBarrier.subresourceRange,
                       VK_PIPELINE_STAGE_TRANSFER_BIT,
                       VK_PIPELINE_STAGE_HOST_BIT);

        // Do blit operation from previous mip level
        vkCmdBlitImage(blitCmd, image, VK_IMAGE_LAYOUT_TRANSFER_SRC_OPTIMAL, image,
                       VK_IMAGE_LAYOUT_TRANSFER_DST_OPTIMAL, 1, &imageBlit, VK_FILTER_LINEAR);

        setImageLayout(imageMemoryBarrier, blitCmd, image, VK_IMAGE_ASPECT_COLOR_BIT,
                       VK_IMAGE_LAYOUT_TRANSFER_DST_OPTIMAL,
                       VK_IMAGE_LAYOUT_TRANSFER_SRC_OPTIMAL,
                       imageMemoryBarrier.subresourceRange,
                       VK_PIPELINE_STAGE_HOST_BIT,
                       VK_PIPELINE_STAGE_TRANSFER_BIT);
    }
}
I don't see baseArrayLayer of imageBlit.srcSubresource and imageBlit.dstSubresource being set to j, which is probably your immediate problem.
Also, your barriers seem bad to me. Only the top mip level needs to be synchronized with the host, and even then VK_PIPELINE_STAGE_HOST_BIT should not be necessary: there is an exception for vkQueueSubmit saying it performs this kind of synchronization implicitly if the host writes finished before it is called (6.9. Host Write Ordering Guarantees, reminded in the Note in 6.1.3. Access Types).
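In code, the immediate fix is just two missing assignments inside the inner loop (a sketch; everything else as in the question):

// Select cube face j explicitly in both subresources; otherwise
// baseArrayLayer stays 0 and every blit reads and writes face 0.
imageBlit.srcSubresource.baseArrayLayer = j;
imageBlit.dstSubresource.baseArrayLayer = j;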

PIC interrupt on change doesn't trigger first time

I'm trying to communicate between a PIC18F24K50 and an Arduino 101. I'm using two lines to establish synchronized communication: on every change from low to high of the first line, I read the value of the second line.
I have no problems with the Arduino code. My problem is that on the PIC, the interrupt on change triggers on the second change from low to high instead of on the first change. This only happens the first time I send data; after that, it works perfectly (it triggers on the first change and I receive the byte properly). Sorry for my English, I'll try to explain myself better with this image:
Channel 1 is the clock signal and channel 2 is the data (I'm sending one byte for the moment, with bit values 10101010). Channel 4 is an output I toggle every time I process a bit (as you can see, it begins on the second rise of the clock signal instead of the first one). This is captured on the first byte sent; the following ones work fine.
Here is the relevant code on the PIC side:
This is where I initialize things:
TRISCbits.TRISC6 = 0;    // RC6 as output (the debug toggle on channel 4)
TRISCbits.TRISC1 = 1;    // RC1 as input (clock, channel 1)
TRISCbits.TRISC2 = 1;    // RC2 as input (data, channel 2)
IOCC1 = 1;               // enable interrupt on change for RC1
ANSELCbits.ANSC2 = 0;    // RC2 digital, not analog
IOCC2 = 0;               // no interrupt on change for RC2
INTCONbits.IOCIE = 1;    // enable IOC interrupts
INTCONbits.IOCIF = 0;    // clear the IOC flag
And this is the interrupt code:
void interrupt SYS_InterruptHigh(void)
{
    if (INTCONbits.IOCIE == 1 && INTCONbits.IOCIF == 1)
    {
        readByte();
    }
}

void readByte(void)
{
    while (contaBits < 8)
    {
        INTCONbits.IOCIE = 0;
        INTCONbits.IOCIF = 0;
        while (PORTCbits.RC1 != HIGH)
        {
        }
        if (PORTCbits.RC1 == HIGH)
        {
            LATCbits.LATC6 = !LATCbits.LATC6;
            //LATCbits.LATC6 = ~LATCbits.LATC6;
            switch (contaBits)
            {
            case 0:
                if (PORTCbits.RC2 == HIGH)
                    varByte.b0 = 1;
                else
                    varByte.b0 = 0;
                break;
            case 1:
                if (PORTCbits.RC2 == HIGH)
                    varByte.b1 = 1;
                else
                    varByte.b1 = 0;
                break;
            case 2:
                if (PORTCbits.RC2 == HIGH)
                    varByte.b2 = 1;
                else
                    varByte.b2 = 0;
                break;
            case 3:
                if (PORTCbits.RC2 == HIGH)
                    varByte.b3 = 1;
                else
                    varByte.b3 = 0;
                break;
            case 4:
                if (PORTCbits.RC2 == HIGH)
                    varByte.b4 = 1;
                else
                    varByte.b4 = 0;
                break;
            case 5:
                if (PORTCbits.RC2 == HIGH)
                    varByte.b5 = 1;
                else
                    varByte.b5 = 0;
                break;
            case 6:
                if (PORTCbits.RC2 == HIGH)
                    varByte.b6 = 1;
                else
                    varByte.b6 = 0;
                break;
            case 7:
                if (PORTCbits.RC2 == HIGH)
                    varByte.b7 = 1;
                else
                    varByte.b7 = 0;
                break;
            }
            contaBits++;
        }
    } //while(contaBits<8)
    INTCONbits.IOCIE = 1;
    contaBits = 0;
}
LATCbits.LATC6 = !LATCbits.LATC6; <-- this is the line corresponding to channel 4.
RC1 is channel 1 and RC2 is channel 2.
My question is: what am I doing wrong? Why, for the first byte sent, doesn't the interrupt trigger on the first change of line 1?
Thank you.
What are you trying to achieve?
The communication protocol you're describing is commonly referred to as SPI (Serial Peripheral Interface). You should use the hardware implementation provided by the PIC/Microchip, if possible, for best performance.
Keep your code documented/formatted/logical
I noticed your code is a little weird, and weird code gives weird errors.
First:
You're using interrupts, yet you have blocking code inside your interrupt handler, which isn't nice at all.
Second:
Your contaBits and varByte variables come out of nowhere; they're probably globals, which might not be best practice. That said, when you're using interrupts it can make sense to hand values to your main program through globals, but such globals should be volatile.
Third:
That switch is just the same code eight times, each a little different.
Also, add some comments to your code.
To be honest
I didn't spot your actual error; it has been a long time since I worked with PICs.
But you should try hardware SPI.
Below is my attempt at the software SPI you described. It might be a little more logical, and it takes the annoyance of the interrupts out.
I would recommend replacing the while(CLOCK_PIN_MACRO != 1){}; with a bounded wait such as for(unsigned int timeout = 64000; timeout > 0; timeout--){}; (see the sketch after the code below), so that you can be sure your program won't hang waiting for an SPI signal that never comes. (Make the timeout something that fits your needs.)
#define CLOCK_PIN_MACRO PORTCbits.RC1
#define DATA_PIN_MACRO  PORTCbits.RC2

/*
 Test, without interrupts.
 I would strongly advise to use hardware SPI or non-blocking code.
*/
unsigned char readByte(void) {
    unsigned char returnByte = 0;
    for (int i = 0; i < 8; i++) {         // Do this for bit 0 to 7, to get a complete byte.
        while (CLOCK_PIN_MACRO != 1) {};  // Wait until the clock pin goes high.
        if (DATA_PIN_MACRO == 1)          // Check the data pin.
            returnByte |= (1 << i);       // OR the bit into our byte. (https://www.cs.umd.edu/class/sum2003/cmsc311/Notes/BitOp/bitshift.html)
        while (CLOCK_PIN_MACRO == 1) {};  // Wait until the clock pin goes low again.
    }
    return returnByte;
}

void setup(void) {
    TRISCbits.TRISC6 = 0;
    TRISCbits.TRISC1 = 1;
    TRISCbits.TRISC2 = 1;
    //IOCC1 = 1;
    ANSELCbits.ANSC2 = 0;
    //IOCC2 = 0;
    INTCONbits.IOCIE = 1;
    INTCONbits.IOCIF = 0;
}

void run(void) {
    unsigned char theByte = readByte();
}

void main(void) {
    setup();
    while (1) {
        run();
    }
}
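The bounded wait mentioned above might look like this (a sketch that combines the clock check with the suggested countdown; 64000 is an arbitrary value):

unsigned int timeout;

// Wait for the clock to go high, but give up eventually instead of
// hanging forever if no SPI signal arrives.
for (timeout = 64000; timeout > 0 && CLOCK_PIN_MACRO != 1; timeout--) {
}
if (timeout == 0) {
    // Handle the timeout, e.g. abort this byte.
}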
You can find the following in the datasheet:
The pins are compared with the old value latched on the last read of
PORTB. The "mismatch" outputs of RB7:RB4 are ORed together to generate
the RB Port Change Interrupt with Flag bit, RBIF (INTCON<0>).
This means that when you read the PORTB RB7:RB4 pins, the value of each pin is stored in an internal latch, and an interrupt is generated only when a pin input changes from the previously latched value.
So what is the default initial value in the latch, i.e. before we have read anything from PORTB?
Whatever it is, in your case the default latch value differs from the logic state of your input signal. On the next transition of the signal, the latch value and the signal level become the same, so no interrupt is generated the first time.
What you can do is set the latch value to the initial state of your signal. This can be done by reading the port (it should be done before enabling the interrupt).
The code below is enough to simply read the register:
unsigned char ch;
ch = PORTB;
Hope this will help.
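Applied to the question's setup, a sketch of the fix (the clock is on RC1 here, so it is PORTC being primed rather than PORTB):

// Sketch: prime the IOC latch with a dummy read of PORTC BEFORE
// enabling the interrupt, so the very first low-to-high edge is seen.
volatile unsigned char dummy;

TRISCbits.TRISC1 = 1;    // RC1 as input (clock line)
IOCC1 = 1;               // interrupt on change enabled on RC1

dummy = PORTC;           // latch the current pin state
INTCONbits.IOCIF = 0;    // clear any stale mismatch flag
INTCONbits.IOCIE = 1;    // only now enable IOC interrupts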

Retrieve and sort results based on several criteria

Here's the scenario...
I have several float attributes in my data model which I want to compare against a number of variables (actually the same attributes on another object), returning any that match: a straightforward NSPredicate.
However, what I would like to do is keep track of which of those comparisons evaluate to true and then count them. I then want to return only the top X results, i.e. those where the most comparisons are true.
Example... (not actual code!!)
object1.float1 = 1;
object1.float2 = 2;
object1.float3 = 3;
object2.float1 = 1;
object2.float2 = 2;
object2.float3 = 4;
object3.float1 = 1;
object3.float2 = 4;
object3.float3 = 4;
float1Variable = 1;
float2Variable = 2;
float3Variable = 3;
kReturnedObjects = 2;
I only want to retrieve object1 and object2.
Any help would be much appreciated; most of my possible solutions so far are incredibly laborious!
What I'd do is define some kind of evaluation function to calculate the "similarity" of the objects, i.e.
s(obj, comparison_obj) := (obj.var1 == comparison_obj.var1) + (obj.var2 == comparison_obj.var2) + (obj.var3 == comparison_obj.var3)
The larger s(obj, comparison_obj) is, the more attributes are the same. You can then use Core Data to give you the list of objects sorted by s in descending order and keep the first kReturnedObjects.
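As a concrete illustration of the idea, here is a plain-C sketch using the question's example values (in the real app the fetch, sort, and result limit would of course go through Core Data rather than qsort; all names here are made up):

#include <stdio.h>
#include <stdlib.h>

typedef struct {
    const char *name;
    float f1, f2, f3;
    int score;   /* number of matching attributes */
} Object;

static int similarity(const Object *o, float v1, float v2, float v3) {
    return (o->f1 == v1) + (o->f2 == v2) + (o->f3 == v3);
}

static int byScoreDesc(const void *a, const void *b) {
    return ((const Object *)b)->score - ((const Object *)a)->score;
}

int main(void) {
    Object objs[] = {
        {"object1", 1, 2, 3, 0},
        {"object2", 1, 2, 4, 0},
        {"object3", 1, 4, 4, 0},
    };
    const int kReturnedObjects = 2;
    for (int i = 0; i < 3; i++)
        objs[i].score = similarity(&objs[i], 1, 2, 3);
    qsort(objs, 3, sizeof objs[0], byScoreDesc);
    for (int i = 0; i < kReturnedObjects; i++)   /* prints object1, object2 */
        printf("%s (score %d)\n", objs[i].name, objs[i].score);
    return 0;
}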

AVCodecContext settings for H264 (1080i)

I'm trying to configure x264 for 1080i capture. Most of the settings below were collected from different examples; compiled together, however, they don't work. The ffmpeg API reports no error, but avcodec_encode_video() always returns zero.
Some of the numbers are strange to me... for example, gop_size. Isn't 250 too high?
Even if you can't offer the final answer, I'm still interested in any kind of comment on this subject.
pCodecContext->codec_type = AVMEDIA_TYPE_VIDEO;
pCodecContext->codec_id = CODEC_ID_H264;
pCodecContext->coder_type = FF_CODER_TYPE_AC;
pCodecContext->flags |= CODEC_FLAG_LOOP_FILTER | CODEC_FLAG_INTERLACED_ME | CODEC_FLAG_INTERLACED_DCT;
pCodecContext->me_cmp |= 1;
pCodecContext->partitions |= X264_PART_I8X8 | X264_PART_I4X4 | X264_PART_P8X8 | X264_PART_B8X8;
pCodecContext->me_method = ME_UMH;
pCodecContext->me_subpel_quality = 8;
pCodecContext->me_range = 16;
pCodecContext->bit_rate = 10 * 1024 * 1024; // 10 Mbps??
pCodecContext->width = 1920;
pCodecContext->height = 1080;
pCodecContext->time_base.num = 1; // 25 fps
pCodecContext->time_base.den = 25; // 25 fps
pCodecContext->gop_size = 250; // 250
pCodecContext->keyint_min = 25;
pCodecContext->scenechange_threshold = 40;
pCodecContext->i_quant_factor = 0.71f;
pCodecContext->b_frame_strategy = 1;
pCodecContext->qcompress = 0.6f;
pCodecContext->qmin = 10;
pCodecContext->qmax = 51;
pCodecContext->max_qdiff = 4;
pCodecContext->max_b_frames = 3;
pCodecContext->refs = 4;
pCodecContext->directpred = 3;
pCodecContext->trellis = 1;
pCodecContext->flags2 |= CODEC_FLAG2_WPRED | CODEC_FLAG2_MIXED_REFS | CODEC_FLAG2_8X8DCT | CODEC_FLAG2_FASTPSKIP; // wpred+mixed_refs+dct8x8+fastpskip
pCodecContext->weighted_p_pred = 2; // not implemented with interlaced ??
pCodecContext->crf = 22;
pCodecContext->pix_fmt = PIX_FMT_YUV420P;
pCodecContext->thread_count = 0;
You could analyze some existing 1080i H.264 video files to see their parameters.
I found these links useful:
http://www.cardinalpeak.com/blog/?p=878
http://sourceforge.net/projects/h264bitstream/
You should strive to avoid setting any x264 options yourself; the library itself knows best and you'll only get poor tunings from reading old source code. Use the AVOption API to set the "preset"/"tune"/"profile" options on the encoder to what you need (see x264 --help).
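For instance, a minimal sketch of that AVOption route (av_opt_set() is the standard libavutil call; the particular values "medium"/"film"/"high" are illustrative, not a recommendation):

#include <libavutil/opt.h>

// Set libx264's private options through the AVOption API instead of
// poking individual AVCodecContext fields.
av_opt_set(pCodecContext->priv_data, "preset",  "medium", 0);
av_opt_set(pCodecContext->priv_data, "tune",    "film",   0);
av_opt_set(pCodecContext->priv_data, "profile", "high",   0);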