We are having some problems installing our app on machines rather than upgrading. I am getting crashes when creating Performance Counters. Does an application have to be run as an Administrator for Performance Counters to work? And where are Performance Counters stored? I do not seem to be able to find them in the registry.
Thanks,
Doug
I am not sure I understand your question correctly (installing rather than upgrading?). Also, what kind of crash are you getting? What architecture are we talking about? OS? What instructions and performance counters are you looking at? I will assume Intel (IA-32).
The rough answer would be yes: depending on what performance counters you are accessing, and what you want to do with them, you might need to be running as an administrator.
The Intel processor architecture has a protection system that controls the access of programs running on the processor to its resources (memory regions, registers, ports, etc.), implemented through privilege levels that grant levels of access to these resources. The IA-32 architecture has four privilege levels (0 to 3). The highest privilege is at level 0, the lowest at level 3. Modern operating systems use level 0 for the operating system kernel, which is why level 0 is also known as kernel mode (or Ring 0). Level 3 is used for less critical programs; this is the level at which user space operates and where most application programs execute.
Depending on the configuration of both programmable and fixed counters, events can be counted at different privilege levels: user, kernel, or user+kernel. This is particularly useful for isolating measurements of a given piece of code from procedures run by the kernel.
In the case of performance counters, there are two available modes to access them: user mode or kernel mode. Depending on the access rights, two instructions allow reading the counters: RDPMC and RDMSR. The counters are configured by writing to the corresponding MSR with the WRMSR instruction, which can only be executed at privilege level 0.
RDMSR can only be executed at privilege level 0, so users need an interface if they want to read counters that way. RDPMC can be set to execute at any privilege level without the need to enter the kernel (although enabling that requires privilege level 0). Note that this refers to access to the counters, not to what the counters are counting, which is set up during configuration of the counters to count events that occur at user level, kernel level, or both.
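To make the RDPMC path concrete, here is a minimal user-space sketch in C (GCC-style inline assembly, x86). It assumes the kernel has set CR4.PCE to allow user-space RDPMC and that the programmable counter at index 0 has already been configured via WRMSR by kernel-mode code; otherwise the instruction raises a fault:

    /* Minimal sketch: reading a performance counter from user space
     * with RDPMC. Assumes CR4.PCE is set and counter 0 was programmed
     * via WRMSR in kernel mode; otherwise RDPMC faults. */
    #include <stdint.h>
    #include <stdio.h>

    static inline uint64_t rdpmc(uint32_t index)
    {
        uint32_t lo, hi;
        /* RDPMC takes the counter index in ECX, returns EDX:EAX */
        __asm__ volatile("rdpmc" : "=a"(lo), "=d"(hi) : "c"(index));
        return ((uint64_t)hi << 32) | lo;
    }

    int main(void)
    {
        uint64_t before = rdpmc(0);
        /* ... code under measurement ... */
        uint64_t after = rdpmc(0);
        printf("counter delta: %llu\n",
               (unsigned long long)(after - before));
        return 0;
    }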
You can check the manuals for the architecture you are using; for Intel you can find them here.
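As a concrete example of the kernel interface mentioned above: on Linux, the usual route is the perf_event_open(2) system call, which has the kernel program and read a counter on your behalf. A minimal sketch, assuming x86-64 Linux with perf events available (the counted loop is just a stand-in workload):

    /* Hedged sketch: counting retired user-level instructions for this
     * process via Linux perf_event_open(2). The kernel does the WRMSR
     * configuration; unprivileged code just reads the result. */
    #define _GNU_SOURCE
    #include <linux/perf_event.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/ioctl.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    int main(void)
    {
        struct perf_event_attr attr;
        memset(&attr, 0, sizeof(attr));
        attr.type = PERF_TYPE_HARDWARE;
        attr.size = sizeof(attr);
        attr.config = PERF_COUNT_HW_INSTRUCTIONS;
        attr.disabled = 1;
        attr.exclude_kernel = 1;   /* count user-level events only */

        /* pid = 0, cpu = -1: this process, any CPU */
        int fd = syscall(SYS_perf_event_open, &attr, 0, -1, -1, 0);
        if (fd < 0) { perror("perf_event_open"); return 1; }

        ioctl(fd, PERF_EVENT_IOC_RESET, 0);
        ioctl(fd, PERF_EVENT_IOC_ENABLE, 0);

        volatile long sum = 0;
        for (long i = 0; i < 1000000; i++) sum += i;  /* workload */

        ioctl(fd, PERF_EVENT_IOC_DISABLE, 0);
        uint64_t count;
        read(fd, &count, sizeof(count));
        printf("instructions retired: %llu\n", (unsigned long long)count);
        close(fd);
        return 0;
    }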
Related
This may be regarded as a naive question.
I'm used to bare-metal programming, where I changed register values manually in order to write to the GPIO, and read those same registers when I needed information.
I've recently moved to embedded Linux. I've noticed that dealing with the GPIO can no longer be done from code running in user space.** I can imagine there might be some security/sanity reason for this, but I cannot see it. Why can't code in user space read/write the GPIO? An example of a problem that could be caused by that would be great.
** I am aware of libraries/APIs that enable you to deal with the GPIO from user space, and I am learning to use them. My question is purely out of curiosity.
On some platforms it can be, but it's usually avoided.
Typically Linux runs on hardware with an MMU providing both page-level memory protection, and remapping of a virtual address space to physical addresses.
To access a memory-mapped GPIO from userspace, you'd need to configure the MMU to map the register's hardware address into the desired process's virtual address space, and you'd need to enable read and/or write access to that page.
The problem though is that the granularity is typically poor - a memory page may be something like 4 kilobytes, while a GPIO pin's behavior is governed by a few distinct bits in several different registers. So it's not possible to expose an individual pin to a given process without also exposing everything else in that page.
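For illustration, here is a hedged sketch of how such a mapping is typically obtained on Linux, via mmap() on /dev/mem. The base address used is hypothetical and SoC-specific (consult your chip's reference manual), root access is assumed, and the kernel must permit /dev/mem access to that range (CONFIG_STRICT_DEVMEM can forbid it). Note that the whole page becomes visible, which is exactly the granularity problem described above:

    /* Hedged sketch: mapping a GPIO register page into userspace.
     * GPIO_BASE is hypothetical and SoC-specific; requires root and a
     * kernel that allows /dev/mem access to this physical range. */
    #include <fcntl.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <unistd.h>

    #define GPIO_BASE 0x3F200000UL   /* hypothetical; SoC-specific */
    #define PAGE_SIZE 4096UL

    int main(void)
    {
        int fd = open("/dev/mem", O_RDWR | O_SYNC);
        if (fd < 0) { perror("open /dev/mem"); return 1; }

        volatile uint32_t *gpio = mmap(NULL, PAGE_SIZE,
                                       PROT_READ | PROT_WRITE,
                                       MAP_SHARED, fd, GPIO_BASE);
        if (gpio == MAP_FAILED) { perror("mmap"); return 1; }

        /* The whole 4 KiB page is now visible: every register in it,
         * not just the bits for one pin -- the granularity problem. */
        printf("first GPIO register: 0x%08x\n", gpio[0]);

        munmap((void *)gpio, PAGE_SIZE);
        close(fd);
        return 0;
    }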
Additionally, doing this from userspace would require knowing the precise hardware details of how GPIO works on a given platform, and that's information which usually better belongs in a driver.
There are a few cases where using the sysfs interface is too slow, for example trying to bit-bang some slower interfaces. But typically in those cases, rather than trying to handle the GPIO directly from userspace, a kernel module is written which does the bit-banging from kernel space, and then userspace uses a syscall to pass entire mid- to high-level operation requests to the kernel.
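For reference, a minimal sketch of the sysfs interface mentioned above (since superseded by the gpiod character-device API in recent kernels; the pin number 17 is arbitrary). Every write() here is a system call into the kernel driver, which is why bit-banging through it is slow:

    /* Hedged sketch: driving one pin through the classic sysfs GPIO
     * interface. Pin 17 is arbitrary; error handling is minimal. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        /* Export pin 17 so the kernel creates /sys/class/gpio/gpio17/ */
        int fd = open("/sys/class/gpio/export", O_WRONLY);
        if (fd < 0) { perror("export"); return 1; }
        write(fd, "17", 2);
        close(fd);

        /* Configure the pin as an output */
        fd = open("/sys/class/gpio/gpio17/direction", O_WRONLY);
        if (fd < 0) { perror("direction"); return 1; }
        write(fd, "out", 3);
        close(fd);

        /* Drive the pin high: each write is a syscall into the driver */
        fd = open("/sys/class/gpio/gpio17/value", O_WRONLY);
        if (fd < 0) { perror("value"); return 1; }
        write(fd, "1", 1);
        close(fd);
        return 0;
    }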
How does the operating system manage program permissions? If you write a low-level program that does not use any system calls and directly controls the processor, how can the operating system impose any limits when the program controls the processor directly?
Edit
It seems that my question is not very clear; I apologize, I cannot speak English well and I am using a translator. Anyway, what I am wondering is how an operating system manages the permissions of programs (for example the root user, etc.). If a program is written at a really low level without making system calls, can it have full access to the processor? If so, it could do anything it wants, and as a result the various users/permissions that the operating system enforces would not matter much. However, from the first answer I received, I read that you cannot make useful programs that work without system calls; so can a program not interact directly with hardware (I mean the way the BIOS interacts with hardware, for example)?
Depends on the OS. Something like MS-DOS, which is barely an OS and doesn't stop a program from taking over the whole machine, essentially lets programs run with kernel privilege.
On any OS with memory protection that tries to keep separate processes from stepping on each other, the kernel doesn't allow user-space processes to talk directly to I/O hardware.
A privileged user-space process might be allowed to memory-map video RAM and/or I/O registers of a device into its own address space, and basically act like a device driver. (e.g. an X server under Linux.)
1) It is impossible to have a useful program that does not make any system calls; as the sketch after this list illustrates, even exiting cleanly requires one.
2) Instructions that control the operation of the processor must be executed in kernel mode.
3) The only way to get into kernel mode is through an exception (including system calls). By controlling how exceptions are handled, the operating system prevents malicious access.
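A minimal sketch of point 1, assuming x86-64 Linux and GCC: even a program that does literally nothing must make at least one system call, because the only clean way to terminate is to ask the kernel. Built freestanding (gcc -nostdlib -static), returning from _start instead would simply crash:

    /* Sketch: the one unavoidable system call. Freestanding program
     * for x86-64 Linux; the raw exit syscall (number 60) is the only
     * clean way out. */
    void _start(void)
    {
        __asm__ volatile(
            "mov $60, %%rax\n\t"    /* __NR_exit on x86-64 */
            "xor %%edi, %%edi\n\t"  /* exit status 0 */
            "syscall"
            ::: "rax", "rdi");
    }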
If a program is written at a really low level without making system calls, can it have full access to the processor?
On a modern system this is impossible. A system call is going to be made in the background whether you like it or not.
I wanted to know exactly whose responsibility it is to set the mode bits during system calls to the kernel.
Does the job scheduler manage these bits, or is the whole Process Status Word (PSW) a part of the Process Control Block?
Or is it the responsibility of the interrupt handler to do this? If so, how does the interrupt service routine (being a routine itself) get to perform such a privileged task when no other user routine can? What if some user process tries to modify the PSW? Is the behavior different for different operating systems?
A lot of the protection mechanisms you ask about are architecture-specific. I believe the Process Status Word refers to an IBM architecture, but I am not certain, and I don't know specifically how the Process Status Word is used in that architecture.
I can, however, give you an example of how this is done in the case of x86. In x86, privileged instructions can only be executed on ring 0, which is what the interrupt handlers and other kernel code execute in.
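As a small illustration, assuming x86 and GCC-style inline assembly: the low two bits of the CS segment selector hold the current privilege level (CPL), and user code always reads 3 there, while executing a privileged instruction such as hlt at CPL 3 raises a fault instead:

    /* Sketch: reading the current privilege level (CPL) on x86. The low
     * two bits of CS hold the CPL; user-space code always sees 3. */
    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint16_t cs;
        __asm__("mov %%cs, %0" : "=r"(cs));
        printf("CPL = %d\n", cs & 3);   /* prints 3 in user space */
        /* __asm__("hlt");  -- privileged; would be killed by the OS */
        return 0;
    }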
The way the CPU knows whether code is in kernel space or user space is via protection bits set on that particular page in the virtual memory system. That means when a process is created, certain areas of memory are marked as being user code, and other areas, where the kernel is mapped, are marked as being kernel code, so the processor knows whether the code being executed should have privileged access based on where it is in the virtual memory space. Since only the kernel can modify this mapping, user code is unable to execute privileged instructions.
The Process Control Block is not architecture-specific, which means it is entirely up to the operating system to determine how it is used to set up privileges and such. One thing is certain, however: the CPU does not read the Process Control Block as it exists in the operating system. Some architectures could have their own process control mechanism built in, but this is not strictly necessary. On x86, the Process Control Block would be used to know what sort of system calls the process can make, as well as the virtual memory mappings that tell the CPU its privilege level.
While different architectures have different mechanisms for protecting user code, they all share many common attributes in that when kernel code is executed via a system call, the system knows that only the code in that particular location can be privileged.
A colleague of mine is getting a quote from a software developer that involves serial communication, and in their quote the developer makes the following statement:
...Windows 7 operating system, which uses a non-real-time serial communication setup.
Is it true that Windows 7 does not support real-time serial communication? To clarify what is meant by "real-time": the project deals with robot automation, and any delays in communication (such as from buffering) could damage the product or stop the production line. I cannot find any evidence to either support or deny this claim. I don't believe it to be true, though; I think it probably has more to do with them using VB.Net for development.
The 'real-time' term used here does not actually refer to anything in the serial communications bus.
However, it does have to do with the fact that the Windows multitasking scheduler is not designed to allow for real-time tasks with hard deadlines.
See this question for some info: Why is Windows not considered suitable for real time systems/high performance servers?
Let's pretend you have a particle accelerator hooked up to your computer and you have to ensure that every 10 microseconds the magnet train switches to power the next set of cells, but Windows decides that it's time to apply some Windows Update patches. Your photon stream wouldn't get redirected properly and could cause damage to the system.
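You don't need a particle accelerator to see this. The sketch below, assuming the Win32 API, asks Sleep() for a 1 ms delay and measures what it actually delivers with QueryPerformanceCounter; the variation you observe is the scheduler jitter that rules out hard deadlines:

    /* Hedged sketch: measuring scheduler jitter on Windows. Sleep(1)
     * requests a 1 ms delay, but the actual wakeup time varies -- the
     * non-determinism that makes Windows unsuitable for hard real-time. */
    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        LARGE_INTEGER freq, t0, t1;
        QueryPerformanceFrequency(&freq);

        for (int i = 0; i < 10; i++) {
            QueryPerformanceCounter(&t0);
            Sleep(1);                       /* request ~1 ms */
            QueryPerformanceCounter(&t1);
            double ms = 1000.0 * (t1.QuadPart - t0.QuadPart)
                               / freq.QuadPart;
            printf("requested 1 ms, slept %.3f ms\n", ms);
        }
        return 0;
    }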
It is a fairly nonsensical statement; Windows itself is not a real-time operating system. It cannot provide hard guarantees that user-mode code is going to respond quickly enough. Beyond thread-scheduling delays, a simple mishap like getting the pages of the process swapped out to the paging file is enough to cause arbitrary delays in getting it running again - an attribute of any demand-paged virtual memory operating system. So of course a "serial communications setup" cannot be real-time either, assuming you are not contemplating writing ring 0 kernel code. Nobody does.
It is not a practical problem: the only point of using a serial port is to talk to the controller for the robot, which provides the real-time guarantee.
You could only get in trouble when you command the robot to make an unrestricted move and use an external sensor to get it to stop - not uncommon when you need to find an object whose location you don't know. A decent controller knows how to do that; avoid implementing it in your Windows code. Solid overtravel protection built into the robot itself that triggers an e-stop is necessary anyway; you can't trust that sensor either.
No, Windows 7 (and in fact all of the mainstream Windows releases) is not a real-time operating system. To clarify what is meant by a real-time operating system:
A real-time operating system (RTOS) is an operating system (OS) intended to serve real-time application requests. It must be able to process data as it comes in, typically without buffering delays. Processing time requirements (including any OS delay) are measured in tenths of seconds or shorter.
A key characteristic of an RTOS is the level of its consistency concerning the amount of time it takes to accept and complete an application's task; the variability is jitter.[1] A hard real-time operating system has less jitter than a soft real-time operating system. The chief design goal is not high throughput, but rather a guarantee of a soft or hard performance category. An RTOS that can usually or generally meet a deadline is a soft real-time OS, but if it can meet a deadline deterministically it is a hard real-time OS.[2]
An RTOS has an advanced algorithm for scheduling. Scheduler flexibility enables a wider, computer-system orchestration of process priorities, but a real-time OS is more frequently dedicated to a narrow set of applications. Key factors in a real-time OS are minimal interrupt latency and minimal thread switching latency; a real-time OS is valued more for how quickly or how predictably it can respond than for the amount of work it can perform in a given period of time.[3]
Note that most of the time real-time operating systems are less efficient (i.e. have lower throughput), which is why none of the mainstream operating systems are real-time (e.g. real-time editions of Linux use completely different kernels) - it's only worth it in cases where timing at a very precise level is absolutely critical.
Windows CE is a real-time operating system: see Real-Time Systems with Microsoft Windows CE 2.1.
I basically wanted to know what exactly a virtual processor is. At IBM's site they define it as:
"A virtual processor is a representation of a physical processor core to the operating system of a logical partition that uses shared processors. "
I understand that if there are x processors, each of which can simultaneously perform two operations, then the system can perform 2x operations simultaneously. But where does a virtual processor fit into this? And I tried looking up the difference between a logical partition and other partitions such as primary, but wasn't really sure.
I'd like to draw an analogy between virtual memory and virtual processors.
Start with expectations:
A user program is written against a set of expectations about what the memory looks like (and a nice flat, large, contiguous memory model is the best...)
An OS is written against a set of expectations about how the hardware behaves (what CPU protection modes and operations are available, how interrupts arrive and are blocked and handled, how to talk to I/O devices, etc...)
Realize that these expectations can be met directly by the hardware, or by an abstraction layer
Virtual memory is a set of (specialized, not found in simple chips) hardware tools and OS services that fool a user program into thinking that it has that nice, flat, large, contiguous memory space, even while the OS is busily dividing the real memory into little pieces, storing some of them on disk, bringing others back, and otherwise making a real hash of it. But your code doesn't care. Everything just works.
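A tiny demonstration of that faking, assuming a POSIX system: after fork(), parent and child print the same virtual address for a variable yet see different values there, because copy-on-write quietly gave each process its own physical page behind the shared virtual address:

    /* Sketch: two processes share a virtual address but not the
     * physical memory behind it, courtesy of virtual memory and
     * copy-on-write. */
    #include <stdio.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        int value = 1;
        pid_t pid = fork();
        if (pid == 0) {              /* child */
            value = 2;               /* triggers copy-on-write */
            printf("child:  %p -> %d\n", (void *)&value, value);
            return 0;
        }
        wait(NULL);                  /* parent */
        printf("parent: %p -> %d\n", (void *)&value, value);
        return 0;
    }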
A virtual processor system is a set of (specialized, not found in consumer CPUs) hardware tools and hypervisor services that allow your OS to believe it has direct access to one or more processors with the expected protection modes, interrupts, etc. even though the hypervisor is busily swapping whole OS contexts onto and off of one or more real processors, starting and stopping access to IO busses, and so on and so forth. But the OS doesn't care. Everything just works.
The hardware support to do this has only recently started to be available in "desktop" CPUs, but Big Iron has had it for ages. It is useful for a couple of reasons:
Protection. In a properly protected OS, it is tough for one process or user to spy on another. But since they can be resident in the same context, it may still be possible. Virtualizing the OSes separates them by another, even thinner channel and makes it that much harder for data to leak, or for malicious things to be done.
Robustness. If you can swap OS contexts in and out, you can migrate them from one machine to another, and checkpoint and restart them. This allows for computers that detect failures on their own processors and recover gracefully.
These are the things (aside from millions of LOC of heavily debugged, mission critical code) that have kept people paying for Big Iron.