Is it possible to create 2 different semaphores for guarding the same resource in VxWorks? - vxworks

Please can anybody help me with this issue.
Is it possible for 2 different tasks to create 2 different semaphores for guarding the same critical resource in VxWorks?

This should be covered in the vxWorks documentation.
In general, you should decouple the creation of the semaphore from the use of the semaphore.
/* Initialization code */
SEM_ID semM;

semM = semMCreate (...);
...
taskSpawn (task1, ...);
taskSpawn (task2, ...);
...

/* Task 1 code */
void task1 (void) {
    ...
    semTake (semM, WAIT_FOREVER);
    /* ... Task 1 critical section ... */
    semGive (semM);
    ...
}

/* Task 2 code */
void task2 (void) {
    ...
    semTake (semM, WAIT_FOREVER);
    /* ... Task 2 critical section ... */
    semGive (semM);
    ...
}
In this instance, semM is a global variable available to both tasks.
If that offends you, VxWorks 6.x also provides the semOpen() API, which gives semaphores names.
Do a semOpen() in each task to create/retrieve the semaphore and its SEM_ID.
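A minimal sketch of that named approach; the helper name and semaphore name are illustrative, and the exact semOpen() argument list used here (name, type, initial state, options, open mode, context) is recalled from the 6.x semLib API, so verify it against your target's documentation:
#include <vxWorks.h>
#include <semLib.h>

/* Each task calls this independently: the first call creates the public
   mutex "/sharedResSem", later calls just retrieve the same SEM_ID. */
SEM_ID get_shared_mutex (void)
{
    return semOpen ("/sharedResSem", SEM_TYPE_MUTEX, 0,
                    SEM_Q_PRIORITY, OM_CREATE, NULL);
}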
When the mutex is created, it is available to the first task that does a semTake (be it task 1 or task 2).
If you need things to happen in a specific order, then you need a combination of a mutex (for mutual exclusion) and synchronization (via a binary semaphore, for example).
Taking the above example and modifying it a bit to make sure that task2 only runs once task1 has done something:
/* Initialization code */
SEM_ID semM;
SEM_ID semSync;

semM = semMCreate (...);
semSync = semBCreate (...);
...

void task1 (void) {
    ...
    /* Access shared structure */
    semTake (semM, WAIT_FOREVER);
    /* ... Task 1 critical section ... */
    semGive (semM);
    /* Notify Task 2 that something is available */
    semGive (semSync);
    ...
}

void task2 (void) {
    ...
    /* Wait for Task 1 to let me know there is something to do */
    semTake (semSync, WAIT_FOREVER);
    /* Access shared structure */
    semTake (semM, WAIT_FOREVER);
    /* ... Task 2 critical section ... */
    semGive (semM);
    ...
}

I think you are missing something here:
in VxWorks, all tasks share the same memory space; therefore, you must protect global data shared between tasks with a mutex.
The idea is to use only one mutex (whose SEM_ID is well known to all tasks that want to use it).
You can protect the data using one mutex (there is no limit on the number of tasks using the same mutex).
If you do want to access the same memory from 2 tasks in VxWorks, the best way to do it is to create an access function:
#include <vxWorks.h>
#include <semLib.h>
#include <errno.h>

int access_func (int time_out)
{
    static SEM_ID sem_id = NULL;
    STATUS return_code;

    if (sem_id == NULL)
    {
        /* Lazy creation; creating the mutex at system init is safer,
           since two tasks could race through this check. */
        sem_id = semMCreate (SEM_Q_PRIORITY);
    }

    return_code = semTake (sem_id, time_out);
    if (return_code != OK)
        return errno;

    /* read or write the shared global memory here */
    semGive (sem_id);
    return OK;
}
Notes:
There are several options for semMCreate() and for the semTake() timeout.
You can create the mutex at system init (recommended).
If the mutex has not been created and sem_id is still NULL, semTake() returns ERROR (you can check the return code and the system errno).
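A minimal sketch of the recommended init-time creation; the names are illustrative and the option flags shown are common semMCreate() choices, not something mandated above:
#include <vxWorks.h>
#include <semLib.h>

SEM_ID shared_data_sem;   /* well-known SEM_ID used by every task */

void system_init (void)
{
    /* Priority-ordered pend queue with priority inheritance enabled. */
    shared_data_sem = semMCreate (SEM_Q_PRIORITY | SEM_INVERSION_SAFE);

    /* spawn the tasks that use shared_data_sem after this point */
}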

Related

Reactive programming - running jobs in a cluster

I need to run some jobs in a cluster, only one at a time.
Because my team uses Hazelcast, I ended up with a solution based on the
Hazelcast ILock implementation. For the purpose of the question, I am going to make a generalisation about it. Let's suppose we have the following interfaces (that could easily be implemented e.g. by Hazelcast or Redisson (Redis)):
public interface MyDistributedLock {
    boolean lock();
    void unlock();
    boolean isLockedByCurrentThread();
}

public interface MyLockDistributedFactory {
    MyDistributedLock getLock(String name);
}
And a lock method that waits if the lock cannot be acquired:
private Mono<Void> lock(String name, Publisher<?> publisher, MyLockDistributedFactory myLockFactory) {
    // important to release the lock on the same thread as
    // it was acquired
    Scheduler scheduler = Schedulers.newSingle(name.toLowerCase());
    return Mono.defer(() -> Mono.just(myLockFactory.getLock(name)))
            .publishOn(scheduler)
            .doOnNext(MyDistributedLock::lock)
            .doOnNext(lock -> LOGGER.info("Process acquired lock for resource {}", name))
            .flatMapMany(lock -> Flux.from(publisher))
            .publishOn(scheduler)
            .doFinally(signalType -> {
                MyDistributedLock lock = myLockFactory.getLock(name);
                if (signalType == SignalType.CANCEL) {
                    // cancel ignores publishOn, so release on the scheduler explicitly
                    scheduler.schedule(() -> {
                        lock.unlock();
                        LOGGER.info("Process released lock for resource {} due to signal type {}", name, signalType);
                    });
                } else if (lock.isLockedByCurrentThread()) {
                    lock.unlock();
                    LOGGER.info("Process released lock for resource {} due to signal type {}", name, signalType);
                }
            })
            .then();
}
And an example of a job:
private Mono<Void> someJobRunEveryOneHourOnEveryNodeInCluster() {
    MyLockDistributedFactory hazelcast = ...;
    return lock("some-job", Flux.just(1, 2), hazelcast)
            .repeatWhen(afterOneHour());
}
I wonder whether this is a good approach to using Project Reactor (and a correct implementation), or whether it should be done in a different way. Please advise.
It is a correct approach when using Reactor, because you took care of offloading the blocking portion onto a dedicated Scheduler/Thread.
But I'd say mutually exclusive code like this is not a very good fit for reactive programming in general: you lose one of the key benefits of doing more with fewer threads, you risk blocking other parts of the application should you forget to publishOn a dedicated thread, etc.

STM32F412 using FreeRTOS and USB to do audio processing

I am using an STM32F4 Nucleo board. I can transmit the audio data through USB to a PC without FreeRTOS. Now I want to learn how to integrate FreeRTOS and USB together, but I have some questions about how threads and ISRs fundamentally interact with each other.
Below I have two files.
In main.c, two threads are created. In usb_thread, I initialize the USB driver and do nothing else.
In vr_thread, it waits until state == 1 and then processes PCM_Buffer.
/* main.c */
#include "stm32f4xx_hal.h"
#include "cmsis_os.h"

extern uint16_t PCM_Buffer[16];
int state = 0;

osThreadId usb_thread_handle;
osThreadId vr_thread_handle;

static void usb_thread(void const *argument);
static void vr_thread(void const *argument);

int main(void)
{
    HAL_Init();
    SystemClock_Config();
    MX_GPIO_Init();

    osThreadDef(usb_t, usb_thread, osPriorityNormal, 0, configMINIMAL_STACK_SIZE);
    osThreadDef(vr_t, vr_thread, osPriorityNormal, 0, configMINIMAL_STACK_SIZE);
    usb_thread_handle = osThreadCreate (osThread(usb_t), NULL);
    vr_thread_handle = osThreadCreate (osThread(vr_t), NULL);

    osKernelStart();

    for (;;) {}
}
static void usb_thread(void const *argument)
{
    /* Do some initialization here. */
    for (;;) {}
}

static void vr_thread(void const *argument)
{
    /* Do some initialization here. */
    for (;;) {
        if (state == 1) {
            state = 0;
            process_buffer(PCM_Buffer);
        }
    }
}
In app.c, USB_AUDIO_CallBack will be called by the USB ISR every 1 millisecond. It transmits PCM_Buffer to the PC first, because that is the time-critical part, and then sets state to 1.
/* app.c */
uint16_t PCM_Buffer[16];
extern int state;

/* Called by the USB ISR every 1 ms. */
void USB_AUDIO_CallBack(void)
{
    Send_Audio_to_USB((int16_t *)(PCM_Buffer), NUM_AUDIO_BUF);
    state = 1;
}
Here are my questions.
1. How do I find out the basic tick period of FreeRTOS? USB_AUDIO_CallBack will be
called every 1 millisecond; how do I know whether the FreeRTOS tick is faster or slower
than 1 millisecond? Is the FreeRTOS tick equal to SysTick?
2. Let's assume the process time of process_buffer is less than 1 millisecond. What I want to accomplish here is described below
hardware trigger
|
usb ISR
|
USB_AUDIO_CallBack
|
state=1
|
vr_thread process_buffer
|
state=0, then wait for hardware trigger again.
I really doubt this is the correct way to do it. Or should I use suspend() and resume()?
3. Is using extern to declare the global PCM_Buffer the correct way to pass a variable between threads, or should I use a queue in FreeRTOS?
I know these questions are trivial but I really want to understand them. Any helpful document or website is welcome. Thanks.
To convert real time (milliseconds) to ticks you can use the macro pdMS_TO_TICKS(xTimeInMs).
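For question 1, a tiny sketch of that conversion; note that configTICK_RATE_HZ in FreeRTOSConfig.h is what actually sets the tick period, and on the STM32 Cortex-M ports the tick is typically driven by SysTick (verify this for your port):
#include "FreeRTOS.h"   /* pulls in FreeRTOSConfig.h and pdMS_TO_TICKS() */

/* One callback period (1 ms) expressed in RTOS ticks; if
   configTICK_RATE_HZ is 1000, this is exactly one tick. */
const TickType_t callback_period_ticks = pdMS_TO_TICKS(1);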
You can define your USB_AUDIO_CallBack as a thread (or task) as well, or move the code from the callback into vr_thread (since your application runs on only one processor). Then, inside the USB ISR, you can send a notification using vTaskNotifyGiveFromISR and receive it inside vr_thread by calling ulTaskNotifyTake. After receiving the notification you can call Send_Audio_to_USB((int16_t *)(PCM_Buffer), NUM_AUDIO_BUF);
and then process_buffer(PCM_Buffer);. It is better to move the code out of the callback and into the task, because the ISR handler will finish its job faster (Send_Audio_to_USB could take a long time), and you still keep things executing in the order you need.
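A minimal sketch of that notification pattern, assuming vr_thread_handle is the native FreeRTOS TaskHandle_t of vr_thread (with the CMSIS-RTOS wrapper you may need to obtain or convert the handle, so treat that as an assumption):
#include "FreeRTOS.h"
#include "task.h"

extern TaskHandle_t vr_thread_handle;   /* assumed: handle of vr_thread */

/* Called from the USB ISR every 1 ms: only notify, do no heavy work here. */
void USB_AUDIO_CallBack(void)
{
    BaseType_t higher_priority_woken = pdFALSE;

    vTaskNotifyGiveFromISR(vr_thread_handle, &higher_priority_woken);
    portYIELD_FROM_ISR(higher_priority_woken);   /* switch to vr_thread if it just became ready */
}

static void vr_thread(void const *argument)
{
    for (;;) {
        /* Block until the ISR notifies us; no busy polling of `state`. */
        ulTaskNotifyTake(pdTRUE, portMAX_DELAY);

        Send_Audio_to_USB((int16_t *)PCM_Buffer, NUM_AUDIO_BUF);
        process_buffer(PCM_Buffer);
    }
}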
I think you mean volatile rather than extern. If you share this buffer between different threads and ISRs you should declare it volatile; but if you use the single-task approach above, you can declare it as a local buffer.

How to wake up a process blocked by pause()?

I need to block and wake a process using SIGUSR2 and SIGUSR1 respectively. Below is my signal handler subroutine. How do I wake a process blocked by pause()?
void sig_handler(int sig) {
    static int i = 1;
    if (sig == SIGUSR2) {
        pause();
    }
    else if (sig == SIGUSR1) {
        /* I don't know what to write here */
    }
}
Also, I read somewhere that pause() is not good programming practice; is there any other way to suspend a process for some time?
See this page
In general, doing a lot of work in signal handlers is ... tricky. Some things are not async-signal-safe, which makes robust programming there a bit difficult. In your case, pause() waits for a signal to arrive, but since you are calling it from the signal handler, it is not going to work there (I think).
As for making the process sleep and resume on signals, look at the page I linked above. The best way is to have the signal handlers simply set flags and have the main thread (i.e. in main() or in an event loop) react to those flags. As recommended by the page, use sigsuspend when SIGUSR2 is received to pause the process until SIGUSR1 is received.
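A minimal sketch of that flag-plus-sigsuspend() idea, with the waiting done in the main flow rather than inside the handler (the flag name and the placeholder work loop are just illustrative):
#include <signal.h>
#include <stdio.h>
#include <unistd.h>

static volatile sig_atomic_t paused = 0;

static void sig_handler(int sig)
{
    if (sig == SIGUSR2)
        paused = 1;                    /* request a pause  */
    else if (sig == SIGUSR1)
        paused = 0;                    /* request a resume */
}

int main(void)
{
    sigset_t mask, oldmask;

    signal(SIGUSR1, sig_handler);
    signal(SIGUSR2, sig_handler);

    /* Keep both signals blocked except while sleeping in sigsuspend(),
       so the flag checks below cannot race with the handler. */
    sigemptyset(&mask);
    sigaddset(&mask, SIGUSR1);
    sigaddset(&mask, SIGUSR2);
    sigprocmask(SIG_BLOCK, &mask, &oldmask);

    for (;;) {
        while (paused)
            sigsuspend(&oldmask);      /* atomically unblock and wait for SIGUSR1 */

        sigprocmask(SIG_UNBLOCK, &mask, NULL);
        /* ... normal work goes here ... */
        puts("working");
        sleep(1);
        sigprocmask(SIG_BLOCK, &mask, NULL);
    }
}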
It's simple. Use the kill() system call:
void sig_handler(int sig) {
    static int i = 1;
    if (sig == SIGUSR2) {
        pause();
    }
    else if (sig == SIGUSR1) {
        kill(<pid of process to wake up>, sig);
        /* make sure that the process with that pid has registered a handler for sig */
    }
}

Asynchronously start only one Task to process a static Queue, stopping when it's done

Basically I have a static custom queue of objects I want to process. From multiple threads, I need to kick off a singular Task that will process the queued objects, stopping the task when all items are dequeued.
Some pseudo code:
static CustomQueue _customqueue;
static Task _processQueuedItems;

public static void EnqueueSomething(object something) {
    _customqueue.Enqueue(something);
    StartProcessingQueue();
}

static void StartProcessingQueue() {
    if (_processQueuedItems == null) {      // create the processing task lazily
        _processQueuedItems = new Task(() => {
            while (_customqueue.Any()) {
                var stuffToDequeue = _customqueue.Dequeue();
                /* do stuff */
            }
        });
        _processQueuedItems.Start();
    }
    if (_processQueuedItems.Status != TaskStatus.Running) {
        _processQueuedItems.Start();
    }
}
If it makes a difference, my custom queue is a queue that essentially holds items until they reach a certain age, then allows them to dequeue. Every time an item is touched its timer starts again. I know this piece works fine.
The part I'm struggling with is the parallelism. (Clearly, I don't know what I'm doing here.) What I want is to have one thread process the queue until it's complete, then go away. If another call comes in, it doesn't start a new thread unless it has to.
I hope that explains my issue okay.
You might want to consider using BlockingCollection<T> here. You could make your custom queue implement IProducerConsumerCollection, in which case BC could use it directly.
You'd then just need to start a long-running Task to call blockingCollection.GetConsumingEnumerable() and process the items in a foreach. The task will automatically block when the collection is empty and resume when a new item is enqueued.

.NET 4.0 Threading.Tasks

I've recently started working on a new application which will utilize task parallelism. I have just begun writing a tasking framework, but have recently seen a number of posts on SO regarding the new System.Threading.Tasks namespace which may be useful to me (and I would rather use an existing framework than roll my own).
However, looking over MSDN I haven't seen how / if I can implement the functionality I'm looking for:
Dependency on other tasks completing.
Able to wait on an unknown number of tasks performing the same action (maybe wrapped in the same task object which is invoked multiple times).
Set maximum concurrent instances of a task (since they use a shared resource, there is no point running more than one at once).
Hint at priority, so the scheduler places tasks with lower maximum concurrent instances at a higher priority (to keep said resource in use as much as possible).
Edit: ability to vary the priority of tasks which are performing the same action (a pretty poor example, but PredictWeather(Tomorrow) would have a higher priority than PredictWeather(NextWeek)).
Can someone point me towards an example / tell me how I can achieve this? Cheers.
C# use case (typed into SO, so please forgive any syntax errors / typos):
Note: Do() / DoAfter() shouldn't block the calling thread.
class Application ()
{
    Task LeafTask = new Task (LeafWork) {PriorityHint = High, MaxConcurrent = 1};
    var Tree = new TaskTree (LeafTask);
    Task TraverseTask = new Task (Tree.Traverse);
    Task WorkTask = new Task (MoreWork);
    Task RunTask = new Task (Run);
    Object SharedLeafWorkObject = new Object ();

    void Entry ()
    {
        RunTask.Do ();
        RunTask.Join (); // Use this thread for task processing until all invocations of RunTask are complete
    }

    void Run ()
    {
        TraverseTask.Do ();
        // Wait for TraverseTask to make sure all leaf tasks are invoked before waiting on them
        WorkTask.DoAfter (new [] {TraverseTask, LeafTask});
        if (running)
        {
            RunTask.DoAfter (WorkTask); // Keep at least one RunTask alive to prevent Join from 'unblocking'
        }
        else
        {
            TraverseTask.Join ();
            WorkTask.Join ();
        }
    }

    void LeafWork (Object leaf)
    {
        lock (SharedLeafWorkObject) // Fake a shared resource
        {
            Thread.Sleep (200); // 'work'
        }
    }

    void MoreWork ()
    {
        Thread.Sleep (2000); // this one takes a while
    }
}

class TaskTreeNode<TItem>
{
    Task LeafTask; // = Application::LeafTask
    TItem Item;

    void Traverse ()
    {
        if (isLeaf)
        {
            // LeafTask set in C-Tor or elsewhere
            LeafTask.Do (this.Item);
            //Edit
            //LeafTask.Do (this.Item, this.Depth); // Deeper items get higher priority
            return;
        }
        foreach (var child in this.children)
        {
            child.Traverse ();
        }
    }
}
There are numerous examples here:
http://code.msdn.microsoft.com/ParExtSamples
There's a great white paper which covers a lot of the details you mention above here:
"Patterns for Parallel Programming: Understanding and Applying Parallel Patterns with the .NET Framework 4"
http://www.microsoft.com/downloads/details.aspx?FamilyID=86b3d32b-ad26-4bb8-a3ae-c1637026c3ee&displaylang=en
Off the top of my head I think you can do all the things you list in your question.
Dependencies etc: Task.WaitAll(Task[] tasks)
Scheduler: The library supports numerous options for limiting number of threads in use and supports providing your own scheduler. I would avoid altering the priority of threads if at all possible. This is likely to have negative impact on the scheduler, unless you provide your own.