I found the function 'renice', which changes the nice value of a process.
But I want to know how to change a process's priority from kernel code.
Is it okay to just change the priority value in the process's sched_entity?
If you want to change the niceness of the process programmatically, I would advise against setting these values in the kernel struct directly. Instead, you can utilize several POSIX functions such as setpriority or pthread_setschedparam.
The default scheduling policy on Linux is SCHED_OTHER, which schedules purely on the niceness level, so by default these two functions achieve the same thing.
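For example, a minimal user-space sketch using setpriority (the PID and nice value here are made up; lowering a nice value below its current setting requires root/CAP_SYS_NICE):

#include <sys/types.h>
#include <sys/time.h>
#include <sys/resource.h>
#include <stdio.h>

int main(void)
{
    pid_t pid = 1234;  /* hypothetical target PID */
    /* Raise the target's nice value to 10 (i.e. lower its priority) */
    if (setpriority(PRIO_PROCESS, pid, 10) != 0)
        perror("setpriority");
    return 0;
}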
If you have access to the task_struct and want to achieve this directly, you just need to set static_prio in the task_struct.
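As a hedged kernel-side sketch: rather than assigning static_prio by hand, the kernel also exports set_user_nice(), which recomputes the priority fields and requeues the task on the proper runqueue, so it is the safer call if it is available to your module:

#include <linux/sched.h>

/* task: a task_struct you have already looked up and hold a reference to */
static void set_task_niceness(struct task_struct *task)
{
    /* Equivalent to renice: updates static_prio and requeues the task.
       Writing task->static_prio directly skips that bookkeeping. */
    set_user_nice(task, 10);  /* nice value in [-20, 19] */
}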
I just implemented a custom cache store for a different backend store, based on the official existing RocksDB one.
That leads me to a number of concerns/questions:
Found out the hard way that PersistenceContextInitializerImpl is auto-generated, and I had added an import in Eclipse to resolve the issue. Now I have to leave it non-imported and showing as an error in Eclipse. Is there a best-practice way to handle this?
Why is RocksDBDbStoreTest#testSegmentsRemovedAndAdded called when segmented is false, since it calls removeSegments, which contractually should not be called if not segmented?
In the same class, why is buildConfig's numSegments set to a value larger than 1 for non-segmented test cases?
Is there any example of a store implementing the NonBlockingStore transactional methods? I mostly want to make sure that all calls come from the same thread.
I wanted to disable the compatibility test, since it is not supported in prior versions. I changed its group to unstable or manual, but it would still always get called, which doesn't seem to match the documentation. What is the right way to exclude it from the build-time run?
Are there any kind of performance/stress tests for persistence stores that can be executed or adapted?
Found out the hard way that PersistenceContextInitializerImpl is auto-generated, and I had added an import in Eclipse to resolve the issue. Now I have to leave it non-imported and showing as an error in Eclipse. Is there a best-practice way to handle this?
There should be a way to have Eclipse run the annotation processor. We use IntelliJ and it works fine out of the box.
Why is RocksDBDbStoreTest#testSegmentsRemovedAndAdded called when segmented is false, since it calls removeSegments, which contractually should not be called if not segmented?
This is just a side effect of having a parameterized test like that. At actual runtime it won't be invoked. You can just have the test ignore it if it is segmented:
if (segmented) return;
In the same class, why is buildConfig's numSegments set to a value larger than 1 for non-segmented test cases?
The Infinispan data container is always segmented when running in any of the clustered modes. The store, however, is not required to be segmented in those cases. If the store is not segmented, you can ignore any segment parameters, as documented.
Is there any example of a store implementing the NonBlockingStore transactional methods? I mostly want to make sure that all calls come from the same thread.
You can see some at https://github.com/infinispan/infinispan/blob/main/persistence/jdbc/src/main/java/org/infinispan/persistence/jdbc/stringbased/JdbcStringBasedStore.java#L724. The methods can and will be invoked from different threads, which is why the store keeps a Map keyed by the Transaction.
I wanted to disable the compatibility test, since it is not supported in prior versions. I changed its group to unstable or manual, but it would still always get called, which doesn't seem to match the documentation. What is the right way to exclude it from the build-time run?
Not sure I understand the question. For your store you shouldn't need any compatibility test, so just don't copy that test file.
Are there any kind of performance/stress tests for persistence stores that can be executed or adapted?
We have https://github.com/infinispan/infinispan-benchmarks/blob/main/cachestores/src/main/java/org/infinispan/jmhbenchmarks/InfinispanHolder.java that should work.
I am trying to create a synchronized USRP source block in GNU Radio consisting of multiple B210 USRP devices. Language: C++.
From what I have found I need to:
Instantiate multiple multi_usrp_sptr objects, as each B210 requires one and multiple B210 devices cannot be addressed using a single sptr
Use external frequency and PPS sources - an option that can be selected from block or set programmatically
Synchronize re/tuning to achieve repeatable phase offset between nodes - this can be achieved using timed commands API https://kb.ettus.com/Synchronizing_USRP_Events_Using_Timed_Commands_in_UHD
Synchronize the sample streams using the time_spec property of the issue_stream command
The problem is: how should I insert these timed commands and set the stream's time_spec in a GNU Radio block or the gr-uhd libs?
I looked into the gr-uhd folder where the sink/source code resides and found functions that could be altered.
Unfortunately, I don't know how to copy or export this library to make these modifications and later compile it so that my custom blocks end up in GNU Radio, because gr-uhd seems to be built in and compiled at GNU Radio installation.
I attempted copying and then making the lib, but that's not the way - it didn't succeed. Should I add my own source block via gr_modtool and insert only the commands I need?
Keeping compatibility with uhd and its functions, apart from just adding a few lines, would be advantageous, so that I don't have to write the source from scratch.
Please advise
Edit
Experimental flowchart, based on Marcus Müller's suggestion:
[Figure: experimental USRP synchronization flowgraph]
The problem is: how should I insert these timed commands and set the stream's time_spec in a GNU Radio block or the gr-uhd libs?
For a USRP sink: add tags containing dictionaries with the correct command times to the streams. The GNU Radio API docs have information on what these dictionaries need to look like. The time field is what you need to set to an appropriate value.
For a USRP source: use set_start_time on the uhd_usrp_source block; use the same dictionaries described above to issue commands like tuning and gain setting at a coordinated time.
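As a hedged C++ sketch of building such a dictionary (the keys "freq" and "time" follow the gr-uhd command-interface docs; the frequency and time values, the helper name, and the "tx_command" tag key are illustrative assumptions):

#include <pmt/pmt.h>
#include <cstdint>

// Build a command dict asking the USRP to retune at an absolute device
// time, given as full seconds plus fractional seconds.
pmt::pmt_t make_timed_tune(double freq_hz, uint64_t full_secs, double frac_secs)
{
    pmt::pmt_t cmd = pmt::make_dict();
    cmd = pmt::dict_add(cmd, pmt::mp("freq"), pmt::mp(freq_hz));
    cmd = pmt::dict_add(cmd, pmt::mp("time"),
                        pmt::make_tuple(pmt::from_uint64(full_secs),
                                        pmt::from_double(frac_secs)));
    return cmd;
}

The resulting dict can be posted to the block's "command" message port, or attached to the TX stream as a tag (key "tx_command") at the sample where it should apply.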
I was trying to find a proper way of synchronizing the USRPs via tags.
There are a few issues I came across in this approach:
Timed commands require knowledge of the current device time, which is obtained via usrp.get_time_now(). Even if I requested the time from the USRP through tags, I would have to somehow extract it from the output (build some kind of loop with proper triggering) (source: https://kb.ettus.com/Synchronizing_USRP_Events_Using_Timed_Commands_in_UHD). Alternatively, everything could be planned in absolute time rather than as offsets. I have seen an approach that resets the device's sense of time on each PPS edge (setting it to 0.0); then command times within the 0.0-1.0 s range would be acceptable, and the loop for reading the time and inserting it into commands would become redundant (see the sketch after this list).
I didn't find a way to create dicts in GR via blocks to make the solution scalable (without writing a few lines of code in a textbox or writing an OOT block).
In the end there is so little information about which kind of solution is most appropriate (PDUs, events, are tags still relevant in GR?), and the docs are so scarce, that after some mailing I decided to add a simple class that inherits from the main top_block.py; after instantiating the top_block it calls a few functions to synchronize the devices. This kind of solution is not the most flexible, and the parent top_block class has to be called through the inheriting one, but it provides an easy programming interface.
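For reference, the PPS-reset-plus-timed-command pattern from the Ettus article looks roughly like this in plain UHD C++ (the device address and the time values are made up; the same calls exist on the multi_usrp object that gr-uhd wraps):

#include <uhd/usrp/multi_usrp.hpp>
#include <thread>
#include <chrono>

int main()
{
    // Hypothetical device address; one sptr per B210.
    uhd::usrp::multi_usrp::sptr usrp =
        uhd::usrp::multi_usrp::make(std::string("serial=XXXXXXX"));

    // Use shared references, then zero the clock on the next PPS edge.
    usrp->set_clock_source("external");
    usrp->set_time_source("external");
    usrp->set_time_next_pps(uhd::time_spec_t(0.0));
    std::this_thread::sleep_for(std::chrono::seconds(1)); // let the edge pass

    // Timed command: the retune executes at t = 0.5 s on the device clock,
    // so every device disciplined the same way retunes in the same cycle.
    usrp->set_command_time(uhd::time_spec_t(0.5));
    usrp->set_rx_freq(uhd::tune_request_t(1e9));
    usrp->clear_command_time();

    // Start streaming at a known absolute time as well.
    uhd::stream_cmd_t cmd(uhd::stream_cmd_t::STREAM_MODE_START_CONTINUOUS);
    cmd.stream_now = false;
    cmd.time_spec = uhd::time_spec_t(1.0);
    usrp->issue_stream_cmd(cmd);
    return 0;
}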
Soon I will add an example of the code used in the inheriting class, just in case.
If there is a neater, more dynamic, or more scalable solution, please let me know or point me to sources.
Basically, how can I make sure that, in my module, a specific process is current? I've looked at kick_process, but I'm not sure how to have my module execute in the context of that process once it is kicked into kernel mode.
I found this related question, but it has no replies. I believe an answer to my question could help that asker as well.
Note: I am aware that if I want the task_struct of a process, I can look it up. I'm interested in running in a specific context since I want to call functions that reference current.
The best way I have found to do anything in the context of a particular process in the kernel is to sleep in process context (the wait_* family of functions) and wake that thread up whenever something needs to be done in its context. This of course means the application has to call into the kernel via an IOCTL or something similar, sleep on that thread, and be woken up whenever you need to do something. This seems to be a very widely used and popular mechanism.
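A minimal sketch of that pattern (the names are made up; assume the application opens your char device and calls this ioctl):

#include <linux/wait.h>
#include <linux/fs.h>

static DECLARE_WAIT_QUEUE_HEAD(ctx_wq);
static int work_pending;  /* set when the module needs the target's context */

/* ioctl handler: runs with current == the calling application's task */
static long my_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
{
    /* Sleep in process context until there is work to do */
    if (wait_event_interruptible(ctx_wq, work_pending))
        return -ERESTARTSYS;
    work_pending = 0;
    /* At this point current is the application's task_struct, so any
       function that references current operates on that process. */
    return 0;
}

/* Called from elsewhere in the module to run code in that context */
static void kick_target_context(void)
{
    work_pending = 1;
    wake_up_interruptible(&ctx_wq);
}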
I want to use my kernel module in order to write to another process's memory (I would like to do it in kernel mode and avoid the ptrace interface).
I have to use functions (like do_mmap(), do_munmap(), sys_mprotect(), etc.) which affect the current process's memory instead of the process I'd like them to affect.
So I figured I need to find a way to do a context switch to the process I want, in order to make it current. I tried to copy the implementation of schedule() with a minor change:
I replaced the line:
next = pick_next_task(rq);
with:
next = myNext;
My problem is that schedule() requires so many structs and functions that I can't include that I would have to re-implement them, and it seems pretty bad to do such a thing. Do you have any suggestions?
I want to avoid modifying the existing kernel, so users won't have to restart and modify their operating system in order to use my program (which is why I use modules).
By the way, I use the "2.6.38-11-generic" version of Linux.
Use the get_user_pages() function to get the pages of the target process (more precisely, its mm_struct)
Map the page(s) that you need via kmap() or kmap_atomic() (depending on the context)
Write/read at the address returned by the mapping (within a page size).
Destroy the mapping via kunmap() or kunmap_atomic()
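Put together, a sketch for a 2.6.38-era kernel (the get_user_pages() signature changes across kernel versions, so check your own headers; error handling is minimal and the function name is made up):

#include <linux/mm.h>
#include <linux/highmem.h>
#include <linux/sched.h>
#include <linux/string.h>

/* Write len bytes into task's memory; len must stay within one page */
static int write_to_task(struct task_struct *task, unsigned long addr,
                         const void *buf, size_t len)
{
    struct mm_struct *mm = get_task_mm(task);
    struct page *page;
    void *kaddr;
    int got;

    if (!mm)
        return -EINVAL;

    down_read(&mm->mmap_sem);
    /* 2.6.38-era call; newer kernels use get_user_pages_remote() */
    got = get_user_pages(task, mm, addr, 1, 1 /* write */, 0 /* force */,
                         &page, NULL);
    up_read(&mm->mmap_sem);
    if (got != 1) {
        mmput(mm);
        return -EFAULT;
    }

    kaddr = kmap(page);                                     /* map the page */
    memcpy((char *)kaddr + (addr & ~PAGE_MASK), buf, len);  /* stay in-page */
    kunmap(page);

    set_page_dirty_lock(page);  /* mark dirty so the write is not lost */
    put_page(page);             /* drop the reference get_user_pages took */
    mmput(mm);
    return 0;
}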
I need a way to store large data chunks (~1-2 MB) at a high rate (~200-300 Mbit/s).
After some research I found several options:
aio_write
Direct_IO
Carbon File Manager's PBWriteForkAsync()
default fwrite(), wrapped in a block and dispatched via GCD
NSData's appendData in an NSOperation
...
This wiki page describes the state of aio_write under Linux. What I didn't find was a similar page about the state of aio_write for Mac OS X.
NSOperation or Blocks+GCD seems to be a technique to achieve non-blocking IO. It is used in several open source IO libraries (e.g. https://github.com/mikeash/MAAsyncIO)
Has someone with a similar problem found a suitable solution?
Currently I tend towards PBWriteForkAsync, as it takes some 'tuning' parameters. It should also be 64-bit safe.
I don't know Mac OS very well, but I'd also try the open and write syscalls from unistd.h with the non-blocking option O_NONBLOCK (reference).
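For what it's worth, that call sequence looks like this (a sketch; note that for regular files O_NONBLOCK is generally a no-op, it mainly affects pipes and devices):

#include <fcntl.h>
#include <unistd.h>
#include <stdio.h>

int main(void)
{
    char buf[4096] = {0};  /* stand-in for a data chunk */
    int fd = open("out.bin", O_WRONLY | O_CREAT | O_NONBLOCK, 0644);
    if (fd < 0) { perror("open"); return 1; }
    /* write() may transfer fewer bytes than requested; check the count */
    ssize_t n = write(fd, buf, sizeof buf);
    if (n < 0)
        perror("write");
    close(fd);
    return 0;
}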
You should use unbuffered I/O for the writes: in Carbon this is FSWriteFork() with kFSNoCacheBit; on BSD, use fcntl() with F_NOCACHE.
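On Mac OS X the BSD variant looks roughly like this (a sketch; with caching off, writes aligned to and sized in multiples of the device block size behave best):

#include <fcntl.h>
#include <unistd.h>

int open_uncached(const char *path)
{
    int fd = open(path, O_WRONLY | O_CREAT, 0644);
    if (fd >= 0)
        fcntl(fd, F_NOCACHE, 1);  /* bypass the unified buffer cache */
    return fd;
}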
Rather than use the system's non-blocking IO, you may want to consider a worker thread to write the blocks sequentially using a queue. This will give you more control and may end up being simpler, particularly if you want to monitor the queue to see if you are keeping up or not.
See here for more information.
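A sketch of that worker-thread approach in C-style pthreads code (the queue capacity and chunk type are illustrative; watching qcount tells you whether the writer is keeping up):

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

#define QCAP 16  /* bounded queue: at most ~16 x 2 MB of backlog */

typedef struct { void *buf; size_t len; } chunk_t;

static chunk_t queue[QCAP];
static int qhead, qtail, qcount;
static pthread_mutex_t qlock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t qnotempty = PTHREAD_COND_INITIALIZER;
static pthread_cond_t qnotfull = PTHREAD_COND_INITIALIZER;

/* Producer side: blocks when the writer falls behind */
void enqueue_chunk(void *buf, size_t len)
{
    pthread_mutex_lock(&qlock);
    while (qcount == QCAP)
        pthread_cond_wait(&qnotfull, &qlock);
    queue[qtail].buf = buf;
    queue[qtail].len = len;
    qtail = (qtail + 1) % QCAP;
    qcount++;
    pthread_cond_signal(&qnotempty);
    pthread_mutex_unlock(&qlock);
}

/* Worker: writes chunks sequentially to an already-open file */
void *writer_thread(void *arg)
{
    FILE *fp = (FILE *)arg;
    for (;;) {
        pthread_mutex_lock(&qlock);
        while (qcount == 0)
            pthread_cond_wait(&qnotempty, &qlock);
        chunk_t c = queue[qhead];
        qhead = (qhead + 1) % QCAP;
        qcount--;
        pthread_cond_signal(&qnotfull);
        pthread_mutex_unlock(&qlock);
        fwrite(c.buf, 1, c.len, fp);  /* sequential write outside the lock */
        free(c.buf);
    }
    return NULL;
}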