Two threads cannot share data in Python

I am new to multithreaded programming. I put my media_player and server in two threads; the server receives data from another client program, which tells the media_player what operation to perform. But the value of "operation" I get from the server isn't updated in my main thread, so the output of operation in media_player is always None. I hoped it would change as the server receives data.

`global operation` must be added inside the function; otherwise the assignment just creates a local variable of the same name, and the module-level `operation` is never updated.
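A minimal sketch of the fix, with a hypothetical `server` function standing in for the asker's server thread:

```python
import threading

operation = None  # shared state read by the "media player" side

def server():
    """Simulates receiving an operation from a client."""
    global operation        # without this line, the assignment below would
    operation = "pause"     # create a *local* name, and the module-level
                            # variable would stay None forever

t = threading.Thread(target=server)
t.start()
t.join()

print(operation)  # pause
```

Note that `global` only fixes the visibility problem; for anything beyond a single flag you would still want a lock or a `queue.Queue` to hand data between the threads safely.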

Related

Azure Data Factory: Execute Pipeline activity cannot reference calling pipeline, cyclical behaviour required

I have a number of pipelines that need to cycle depending on availability of data. If the data is not there wait and try again. The pipe behaviours are largely controlled by a database which captures logs which are used to make decisions about processing.
I read the Microsoft documentation about the Execute Pipeline activity which states that
The Execute Pipeline activity allows a Data Factory or Synapse
pipeline to invoke another pipeline.
It does not explicitly state that it is impossible though. I tried to reference Pipe_A from Pipe_A but the pipe is not visible in the drop down. I need a work-around for this restriction.
Constraints:
The pipe must not call all pipes again, just the pipe in question. The preceding pipe is running all pipes in parallel.
I don't know how many iterations are needed and cannot specify this quantity.
As far as possible best effort has been implemented and this pattern should continue.
Ideas:
Create an intermediary pipe that can be referenced. This is no good: I would need to do this for every pipe that requires this behaviour, because dynamic content is not allowed for pipe selection. This approach would also pollute the Data Factory workspace.
Direct control flow backwards after waiting inside the same pipeline if condition is met. This won't work either, the If activity does not allow expression of flow within the same context as the If activity itself.
I thought about externalising this behaviour to a Python application which could be attached to an Azure Function if needed. The application would handle the scheduling and waiting. The application could call any pipe it needed and could itself be invoked by the pipe in question. This seems drastic!
Finally, I discovered the Until activity, which has do-while behaviour. I could wrap these pipes in Until: the pipe executes and finishes and sets the database state to 'finished', or cannot finish, sets the state to incomplete, and waits. The expression then either kicks off another execution or it does not. Additional conditional logic can be included as required in the procedure that sets the value of the variable used by the Until expression. I would need a variable per pipe.
I think idea 4 makes sense. I thought I would post this anyway in case people can spot limitations in this approach and/or recommend an alternative.
Yes, I absolutely agree with All About BI; it seems in your scenario the best suited ADF activity is Until:
The Until activity in ADF functions as a wrapper and parent component for iterations, with inner child activities comprising the block of items to iterate over. The result(s) from those inner child activities must then be used in the parent Until expression to determine if another iteration is necessary.
The assessment condition for the Until activity might comprise outputs from other activities, pipeline parameters, or variables.
When used in conjunction with the Wait activity, the Until activity allows you to create loop conditions to periodically check the status of specific operations. Here are some examples:
Check to see if the database table has been updated with new rows.
Check to see if the SQL job is complete.
Check to see whether any new files have been added to a specific folder.
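As a sketch, an Until activity wrapping the pipe execution and a Wait might look roughly like this in the pipeline JSON (the activity names, the `PipeAStatus` variable, and the timings are placeholders, not taken from the question):

```json
{
  "name": "RetryUntilFinished",
  "type": "Until",
  "typeProperties": {
    "expression": {
      "value": "@equals(variables('PipeAStatus'), 'finished')",
      "type": "Expression"
    },
    "timeout": "0.12:00:00",
    "activities": [
      { "name": "RunPipeA", "type": "ExecutePipeline" },
      { "name": "ReadStatusFromDb", "type": "Lookup" },
      { "name": "WaitBeforeRetry", "type": "Wait",
        "typeProperties": { "waitTimeInSeconds": 300 } }
    ]
  }
}
```

The Lookup reads the 'finished'/'incomplete' state your procedure writes to the database, a Set Variable activity (omitted here) would copy it into `PipeAStatus`, and the Until expression decides whether another iteration runs.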

Does using sqlconnection.clearpool remove a single instance of a process from an app pool?

If all connections with identical sql connection strings are dropped regardless of the individual instance calling the clearpool method, this sounds like a difficulty to me. We have an issue where the close and dispose methods of a sql connection don't actually clear it from the list of connections in the sql activity monitor, and we get a backlog of instances of this same stored procedure being called or active in some way. Based on this idea of all instances of the same process being cleared from the pool based on a single call from a single instance, it sounds as if any instance performing a sql transaction at the time it's being called would be dropped and cause an outage in the transaction that's occurring in mid-process.
A particular wrinkle in this for us is that several people are using our software product at the same time, and the sql connection strings referenced in the vb code are set up using the same variable name for everyone-- but that doesn't mean that all the actual strings assigned to the variable at runtime are the same, does it?
Is the backup of calls to the same procedure something that would be fully cleared from the queue using the .clearpool method, or would only the single instance be cleared? If the single instance is cleared, that's great.
I'm planning to test SqlConnection.State to see if it's performing an action before using ClearPool, to be sure it doesn't drop the connection while the stored procedure is running.
Many misconceptions here.
regardless of the individual instance calling the clearpool method
You cannot call this method on an instance; it is static. VB allows you to write it like an instance call, but really it is not one.
We have an issue where the close and dispose methods of a sql connection don't actually clear it from the list of connections in the sql activity monitor
That is the whole purpose of pooling. The physical connection stays alive. All settings made on the logical connection are reset, though.
and we get a backlog of instances of this same stored procedure being called or active in some way
Highly unlikely. When a connection is recycled it is reset, and all transactions are rolled back. When you close a connection, all running statements are killed. Note, though, that the reset happens when the connection is taken from the pool, not when it is put back. For that reason you should explicitly roll back transactions that you do not wish to commit; do this simply by disposing the reader and transaction objects.
it sounds as if any instance performing a sql transaction at the time it's being called would be dropped and cause an outage in the transaction that's occurring in mid-process.
Clearing the pool only affects connections that are not in use. This is transparent to you.
the sql connection strings referenced in the vb code are set up using the same variable name for everyone-- but that doesn't mean that all the actual strings assigned to the variable at runtime are the same, does it?
Why wouldn't they be the same? There is not enough information here to see any reason why they would differ.
Is the backup of calls to the same procedure something that would be fully cleared from the queue using the .clearpool method, or would only the single instance be cleared?
This statement is based on false assumptions. Clearing the pool has no effect on connections that are in use. That would be a horrible design choice.
Never clear the pool. Simply dispose of your connections when you no longer need them.

Read Tibco messages from a VB.Net application

I am new to the world of Tibco... I have been asked to create a VB.Net application to do a couple of things:
Update the value of a column in a database (which then generates a message in TIBCO EMS).
My application then needs to read this message from TIBCO and determine if the message has a particular word in it, and display the result as Pass or Fail
I have already written the first piece of the task, however, I have no clue on how to proceed on the second one. I am hoping to get some kind of help/guidance on how to proceed! Any suggestions?
Thanks,
NewTibcoUser
This can be done easily depending on which Tibco Tools you own. If you have BW and ADB (Active Database Adapter) then you can use that.
option 1:
If you don't have ADB you can mimic it by doing something like the following (ADB isn't magical; it's pretty straightforward):
1) Create a mirror of the table that is being monitored for changes (you could include just the column you want to monitor plus the key):
Key
ColumnYouWantToMonitor
DeliveryStatus (Adb_L_DeliverStatus)
Transaction type (adb_opCode)
Time It happened (Adb_timestamp)
Delivery Status (ADB_L_DeliveryStatus)
2) Create a trigger on the monitored table that inserts a record into the mirror table.
3) Write a .Net process that monitors the table every 5 or 10 seconds, or whatever (make it configurable): select * from tableX where DeliveryStatus = 'N' order by transactionTime
4) Place the message on the EMS queue or make a service call to your .Net app.
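A minimal sketch of the polling step (3), written in Python with sqlite3 standing in for the real database; the `change_log` table and its column names mirror the mirror-table layout above and are assumptions, and the EMS publish is just a comment:

```python
import sqlite3
import time

def poll_once(conn):
    """Fetch undelivered change records, oldest first, and mark them delivered."""
    cur = conn.execute(
        "SELECT key, monitored_value FROM change_log "
        "WHERE delivery_status = 'N' ORDER BY transaction_time"
    )
    rows = cur.fetchall()
    for key, value in rows:
        # Here you would place a message on the EMS queue or call your .Net app.
        conn.execute(
            "UPDATE change_log SET delivery_status = 'Y' WHERE key = ?", (key,)
        )
    conn.commit()
    return rows

# Demo with an in-memory database standing in for the mirror table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE change_log (key INTEGER, monitored_value TEXT, "
             "delivery_status TEXT, transaction_time REAL)")
conn.execute("INSERT INTO change_log VALUES (1, 'new', 'N', ?)", (time.time(),))
conn.commit()

print(poll_once(conn))   # the one pending row
print(poll_once(conn))   # [] - already delivered, nothing to do
```

In the real service you would wrap `poll_once` in a loop with a configurable sleep, which is exactly the polling the design considerations below warn you to keep cheap.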
Option 2
1) Create a trigger on the table and write the event to a SQL Server Brokering Service Queue
2) Write a .Net app that reads from that SSBS queue and converts it into a EMS Message
some design considerations
Try not to continually query (a.k.a. poll) for changes on your main table, to prevent blocking.
If your app is not running while DB changes are happening, ensure that you have a message expiry time, so that when your app starts it doesn't have to process thousands of messages off the queue (depending on whether you need the messages or not).
If you do need the messages, you may want to make the queue persistent to disk so you don't lose messages. Client acknowledgement in your .Net app would also be a good idea, rather than just auto-ack.
As you mention, the first point is already done (Perhaps with ADB or a custom program reacting to the DB insert).
So, your problem is strictly the "React to content of an EMS message from VB.Net" part.
I see two possibilities :
1- If you have EMS, ADB and BW, make a custom Adapter subscriber (a BW config) to change the DB in some way in reaction to messages on the bus. Your VB application can then simply query the DB to get the response status.
2- If you don't have so many products from the TIBCO stack, then you should make a simple C# EMS client program (see the examples provided in the EMS docs). This client can then signal your VB application (some kind of .Net internal signalling, maybe; I am not an expert myself) or write the response status to the DB.

Continuously checking database from a Windows service

I am making a Windows service which needs to continuously check for database entries that can be added at any time, telling it to execute some code. It looks for entries whose status is set to pending and whose execute-time value is greater than the current time. Is the only way to do this to run SELECT statements over and over? It might need to execute the code every minute, which means running the SELECT statement every minute looking for entries. I'm trying to avoid unnecessary CPU time, because I'll probably end up paying for CPU cycles at the hosting provider.
Be aware that Notification Services is only for SQL 2005, and has been dropped from SQL 2008.
Rather than polling the database for changes, I would recommend writing a CLR stored procedure that is called from a trigger, which is raised when an appropriate change occurs (e.g. insert or update). The CLR sproc alerts your service which then performs its work.
Sending the service alert via a TCP/IP or HTTP channel is a good choice since you can deploy your service anywhere, just by modifying some configuration parameter that is read by the sproc. It also makes it easy to test the service.
I would use an event driven model in your service. The service waits on an auto-reset event, starting a block of work when the event is raised. The sproc communications channel runs on another thread and sets the event on each incoming request.
Assuming the service is doing a block of work and a set of multiple pending requests are outstanding, this design ensures that those requests trigger just 1 more block of work when the current one is finished.
You can also have multiple workers waiting on the same event if overlapping processing is desired.
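The event-driven model described above can be sketched in Python; `threading.Event` plays the role of the auto-reset event (clearing it immediately after waking gives the auto-reset behaviour), and requests that arrive while a block of work is running coalesce into a single additional block:

```python
import threading
import time

work_event = threading.Event()
done = []

def worker(n_iterations):
    """Waits on the event, then performs one block of work per wake-up."""
    for _ in range(n_iterations):
        work_event.wait()
        work_event.clear()   # "auto-reset": clear before starting the work
        time.sleep(0.2)      # simulate a block of work
        done.append("block")

t = threading.Thread(target=worker, args=(2,))
t.start()

work_event.set()     # request 1: wakes the worker, which starts a block
time.sleep(0.05)     # the worker is now busy inside that block
work_event.set()     # requests 2-4 arrive while the worker is busy;
work_event.set()     # setting an already-set event is a no-op, so they
work_event.set()     # coalesce into exactly one additional block
t.join()

print(len(done))  # 4 requests resulted in 2 blocks of work
```

This is the property the answer relies on: however many requests pile up during a block, they trigger just one more block when the current one finishes.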
Note: for external network access the CREATE ASSEMBLY statement will require the PERMISSION_SET option to be set to EXTERNAL_ACCESS.
Given you talk about the service provider, I suspect one of the main alternatives will not be open to you, which is Notification Services. It allows you to register for data-changed events and be notified, without the need to poll the database. It does, however, require Service Broker to be enabled for it to work, and that could potentially be a problem if it is hosted; some companies keep it switched off.
The question is not tagged with a specific database, just SQL; Notification Services is a SQL Server facility.
If you're using SQL Server and open to a different approach, check out SQL Server Notification Services.
Oracle also provides notifications; they call it Database Change Notification.

Can Sql Server 2008 Stored Procedures (or Triggers) manually parallel or background some logic?

If I have a stored procedure or a trigger in SQL Server 2008, can it do some SQL calculations 'in another non-blocking thread', i.e. something in the background?
Also, can two SQL code blocks be run in parallel? Or two stored procs?
For example, imagine we are given the job of calculating the scores for each Stack Overflow user after a user does some 'action' (and please leave all the 'do that elsewhere/as a service/batch/overnight' suggestions aside).
So we have a trigger on the Post table: when a new post is INSERTED, the trigger fires, and as part of that logic it calculates the user's latest score. Instead of waiting for the stored proc to finish and blocking the current SQL execution, can we ask it to calculate the score in the background or in parallel?
cheers!
SQL Server does not have parallel or deferred execution: each block of running code in a connection runs serially, one statement after the other.
To decouple processing, you usually have to use SQL Server Agent jobs or Service Broker. These start executing in a new connection, new session, etc.
This makes sense:
What if you want to rollback your changes? What does the background thread do and how does it know?
What data does it use? New, Old, lock wait, snapshot?
What if it gets ahead of the main thread and uses stale data?
No, but you could write the request to a queue. Service Broker, a SQL Server component, provides support for this kind of thing. It's probably the best option available for asynchronous processing.
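The "write the request to a queue" pattern can be sketched in plain Python, with `queue.Queue` standing in for a Service Broker queue; the trigger side just enqueues and returns immediately, and a separate worker drains the queue (`recalculate_score` is a made-up placeholder for the expensive work):

```python
import queue
import threading

requests = queue.Queue()
scores = {}

def recalculate_score(user_id):
    # Placeholder for the expensive score calculation.
    scores[user_id] = scores.get(user_id, 0) + 10

def worker():
    """Drains the queue, processing one request at a time."""
    while True:
        user_id = requests.get()
        if user_id is None:       # sentinel: shut down
            break
        recalculate_score(user_id)
        requests.task_done()

t = threading.Thread(target=worker)
t.start()

# The "trigger" just enqueues and returns immediately; it never blocks
# on the score calculation.
for user in ("alice", "bob", "alice"):
    requests.put(user)

requests.put(None)
t.join()
print(scores)  # {'alice': 20, 'bob': 10}
```

The trade-off is the same one the answer implies: the caller gets asynchrony, but the score is updated some time after the INSERT commits, not within it.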