Consider for example this modified Simple TCP sample program:
How can I display the current state of the program, such as
Wait for Connection
Connected
Connection terminated
on the front panel, depending on where the "data flow" currently is?
The easiest way to do this is to place a string indicator on your front panel and write messages to a local variable of this indicator at each point where you want to see a status update.
You need to keep in mind how LabVIEW dataflow works: code will execute as soon as the data it depends on becomes available. Sometimes you can use existing structures to enforce this - for example, if you put a string constant inside your loop and wire it to a local variable terminal outside the loop, the write will only happen after the loop exits. Sometimes you may need to enforce that dataflow artificially, for example by placing your operation inside a sequence frame and connecting a wire to the border of the sequence: then what's inside the sequence will only happen after data arrives on that wire. (This is about the only thing you should use a sequence for!)
This method is not guaranteed to be deterministic, but it's usually good enough for giving a simple status indication to the user.
A better version of the above would be to send the status messages on a queue or notifier, which you then read in a separate loop that updates the status indicator. The queue and notifier write functions have error terminals which can help you enforce sequence. A notifier is like the local variable in that you will only see the most recent update; a queue keeps all the data you write to it in order, so it would be more suitable if you want to log all the updates to a scrolling list or log file. With this solution you could add more features: for example, the read loop could add a timestamp in front of each message so you could see how recent it was.
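LabVIEW code is graphical, so there is no block diagram to paste here, but the queue-based producer/consumer idea above maps directly onto text languages. Here is a rough Python sketch of the same pattern (all names are invented for illustration): the worker loop puts a status string on the queue at each stage, and a separate display loop timestamps and shows whatever arrives.

    import queue
    import threading
    import time
    from datetime import datetime

    status_queue = queue.Queue()             # analogue of a LabVIEW queue reference

    def tcp_worker():
        """Producer: the main loop reports where it currently is."""
        status_queue.put("Wait for Connection")
        time.sleep(1)                        # stand-in for waiting on a TCP listener
        status_queue.put("Connected")
        time.sleep(1)                        # stand-in for the read/write loop
        status_queue.put("Connection terminated")
        status_queue.put(None)               # sentinel: tell the display loop to stop

    def status_display():
        """Consumer: the separate loop that updates the 'indicator'."""
        while True:
            msg = status_queue.get()         # blocks until a status message arrives
            if msg is None:
                break
            print(f"{datetime.now():%H:%M:%S}  {msg}")   # timestamp each message

    threading.Thread(target=tcp_worker).start()
    status_display()

A notifier would correspond to keeping only the latest message instead of queuing every one.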
A really good solution to this general problem is to use a design pattern based on a state machine. Now your program flow is clearly organised into different states and it's very easy to add in functionality like sending a different message from each state. There are good examples and project templates for these design patterns included with recent versions of LabVIEW.
You should be able to find more information on any of the terms in bold in the LabVIEW help or on the NI website.
I need to extract some data from my client's SAP ECC (the SUIM -> Users by Complex Selection Criteria report, program RSUSR002).
Normally I give them a table of values, and they have to fill in certain fields to extract what I need.
They have to make 63 different extractions from their SAP system (with different object values, for example, but inside the same transaction, as you can see in the screenshot) and then send me all the extracted files.
Do you know if there is an automated way to extract that, so they don't have to make 63 extractions?
My biggest problem is that they make mistakes every time. There are a lot of fields to fill in.
Can I create a variant and send it to them? Is it possible to export my variant so they can import it without having to fill in the data 63 times?
Thank you.
When a task takes considerable effort from multiple people each year, it might be worth automating.
First you need to find out where that transaction gets its data from. If you spend some time analyzing and debugging the program behind the transaction, you will surely find which SELECTs on which database table(s) provide that data. If you are lucky, there might even be a function module for it.
Then you just need to write your own ABAP program which performs the same selections.
Now about the interesting part: How to get that data to you. There are several approaches here. The best one depends on your requirements and your technical infrastructure. Some possibilities are:
Let users run the program in the foreground, use the method cl_gui_frontend_services=>gui_download to save the data to a file on the user's PC, and ask them to send it to you via email
Run the program in background and save the file on the application server. Then ask your sysadmins how to get that file from their application server to you. The simplest way would be to just map a network fileserver so they all write to the same place, but there might be some organizational hurdles in the way which prevent that. (Our security people would call me crazy if I proposed to allow access to SMB shares from outside of our network, but your mileage may vary)
Have the program send the data to you directly via email. You can send emails from an SAP system using the function module SO_NEW_DOCUMENT_ATT_SEND_API1. This of course requires that the system was configured to be able to send emails (which you can do with transaction code SCOT). Again, security considerations apply. When it's PII or other confidential data, then you should not send it in an unencrypted email.
Use an RFC call to send the data to your own SAP system which aggregates the data
Use a webservice call to send the data to your own non-SAP system which aggregates the data
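For the last option, the receiving side can be very small. Here is a minimal, hypothetical sketch of a non-SAP endpoint that simply stores whatever the ABAP program posts; the port, folder and file naming below are placeholders, not anything prescribed by SAP:

    from http.server import BaseHTTPRequestHandler, HTTPServer
    from datetime import datetime
    from pathlib import Path

    INBOX = Path("inbox")                    # hypothetical folder where extracts accumulate
    INBOX.mkdir(exist_ok=True)

    class ExtractReceiver(BaseHTTPRequestHandler):
        def do_POST(self):
            # read the body the SAP system sent (e.g. a CSV produced by the ABAP report)
            length = int(self.headers.get("Content-Length", 0))
            body = self.rfile.read(length)
            # timestamped name so 63 uploads don't overwrite each other
            target = INBOX / f"extract_{datetime.now():%Y%m%d_%H%M%S_%f}.csv"
            target.write_bytes(body)
            self.send_response(200)
            self.end_headers()
            self.wfile.write(b"OK")

    if __name__ == "__main__":
        HTTPServer(("0.0.0.0", 8080), ExtractReceiver).serve_forever()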
You can create a recording in transaction SM35.
There you enter a transaction code (SUIM), start recording, make your inputs in transaction SUIM and then press 'Execute'. Then you can go back to the recording (F3 multiple times) and the system will generate a table of commands (structure BDCDATA). You can delete the unnecessary parts (e.g. the BACK button click) and save it to use as a 'macro'. Then you can replay this recording and it will do exactly what you did.
It's also possible to export/import the recording to a text file, so you can explore its structure, write some VBA script to create such a recording from your parameters (a sketch of that idea follows below) and send it to the users. But keep in mind that blanks are meaningful.
It's a standard tool, so there's no coding needed in the system.
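The answer above suggests VBA; the same idea works in any language. Assuming you have exported one recording to a text file and replaced the changing value with a placeholder of your own, a small Python script can stamp out the 63 variants by substitution. The placeholder name, file names and value list below are hypothetical, and because blanks are meaningful the substituted value is padded to the placeholder's width:

    from pathlib import Path

    # one exported SM35 recording, with the changing value replaced by a placeholder
    TEMPLATE = Path("recording_template.txt").read_text()
    PLACEHOLDER = "&&OBJECT&&"                  # hypothetical marker put into the template

    # the different object values to extract (shortened here)
    object_values = ["S_TCODE", "S_USER_GRP", "S_USER_AUT"]

    for value in object_values:
        padded = value.ljust(len(PLACEHOLDER))  # keep the fixed-width layout intact
        out = TEMPLATE.replace(PLACEHOLDER, padded)
        Path(f"recording_{value}.txt").write_text(out)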
You can save the selection as a variant.
Fill in the selection criteria and press Save.
It can be reused.
You can also transport variants if they have a special name.
I'm writing a program to control two similar devices in LabVIEW. In order to avoid copying the code I use subVIs. But I have a piece of code where I update some values on the GUI inside a while loop. I'd like to know if it is possible to somehow have this loop inside my subVI and have the subVI send one of its output parameters after each iteration.
To update your GUI from within a subVI you can do one of the following:
Create a queue or notifier in your top level VI and pass the reference in to your subVI. In the subVI, send the data to the queue or notifier. In the top level VI, have a loop that waits for data on the queue or notifier and writes that to the front panel indicator.
Create a control reference to the front panel indicator in the top level VI and pass the reference to your subVI. In the subVI, use a property node to write the Value property of the indicator.
If you look at the LabVIEW help for the terms in bold you'll find documentation and examples for how to use them.
Of these options, I would use a queue for any data where it's important that the top level VI receives every data point (e.g. if the data is being plotted on a chart or logged to a file) or a notifier where it's only necessary that the user sees the latest value. Using control references for this purpose is a bit 'quick and dirty' and can cause performance issues.
If you need to update more than a couple of indicators like this, you'll probably want to build a cluster containing the data you send to the queue/notifier, or containing the control references. Save your cluster as a typedef so that you can modify its contents without breaking your code.
Another option is a channel wire. A channel wire will send data from a producer loop to a consumer loop without the overhead of a reference and property node, and without having to create and close a queue or notifier reference. If you make a simple VI with writer and reader loops as shown in the LabVIEW Help, then select the writer loop and go to Edit -> Create SubVI, you'll have a template to use.
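LabVIEW itself is graphical, so purely as a text-language analogy, here is how the first option (create the queue in the top level VI and pass its reference into the subVI) looks when sketched in Python; the function and device names are made up:

    import queue
    import threading

    def measurement_sub_vi(dev_id: str, updates: queue.Queue, n_iterations: int):
        """Stands in for the subVI: the loop lives here and pushes one value
        per iteration onto the queue whose reference was passed in."""
        for i in range(n_iterations):
            value = i * 0.5                      # stand-in for reading the device
            updates.put((dev_id, value))         # one output per iteration
        updates.put((dev_id, None))              # sentinel: this device is done

    def top_level_vi():
        updates = queue.Queue()                  # created in the top level VI
        # two similar devices reuse the same 'subVI', as in the question
        for dev in ("device_A", "device_B"):
            threading.Thread(target=measurement_sub_vi,
                             args=(dev, updates, 5)).start()
        done = 0
        while done < 2:
            dev, value = updates.get()           # the GUI-update loop
            if value is None:
                done += 1
            else:
                print(f"{dev}: {value}")         # stand-in for writing an indicator

    top_level_vi()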
I've studied BPMN in coursework; this is my first time applying it in real-world scenarios that don't follow any of my textbook examples.
I am trying to illustrate a process where a client can either upload a CSV file, manually enter records, or both. At the end of the day, all records are loaded to a production database via a script. At the moment, I've got it like this:
But unless one reads the notes attached to each object, this reads as though both uploaded AND manual data will be present.
In BPMN, how would I designate that Path "A", Path "B", OR both could be valid? How do I label the gateway? I anticipate putting the scripting step between the data input and the production database, but again I'm not quite sure how to specify that the script runs ONCE based on the presence of data from EITHER feed, not both.
What would this typically look like? Thanks in advance.
In BPMN, to express that Path A, Path B or both could be valid ways forward, you can use an "inclusive or" gateway. I would typically label the split with a question and the outgoing paths with the "answers", in other words the conditions under which the paths are activated. If I understand your example correctly, a possible solution could look like the following.
Whether you want to use the task types I used depends a bit on your more specific context. My task types in that example would mean that for the "upload" the process is "waiting for an incoming message", while in the case of manual entry it is "waiting for a user to complete the task" (by entering the required data).
The example also assumes that you know before you reach the inclusive or gateway which channels you will want to use this time.
I'm planning on developing a Monopoly game using a Console application in VB.NET, but with a separate GUI (probably a Forms application) that displays the state of the Monopoly board based on the information in the Console application, so that it can be ignored or used as the players wish. I've been looking into ways of sending information between two programs, and came across Pipes, except they seem complex and I'd like to use a different method if I can avoid it. The following is the methodology I'm currently considering to send information - I'd like to know if there is any way I could improve this methodology, or if you think it's completely stupid and I should just use Pipes instead -
Program 1 is the Console application which controls everything: the state of the game depends on the Console. Program 2 is the GUI/Forms application which follows instructions sent by Program 1 and displays the board accordingly. Program 1 and Program 2 communicate using two text files, Command.txt and CommandAvailable.txt. When something changes on Program 1 - e.g. a player makes a move - a command string is made and added to a queue. Program 1 continually checks CommandAvailable.txt to ensure that the file is empty, and if so, it clears Command.txt and then appends every command string in the queue to Command.txt. When it has finished, arbitrary text is added to CommandAvailable.txt, e.g. "CommandAvailable".
Program 2 continually checks CommandAvailable.txt until it is not empty, meaning that Program 1 has added at least one command to Command.txt. Program 2 then reads every instruction on Command.txt and adds it onto a queue on the other side. CommandAvailable.txt is then cleared, which will permit Program 1 to add more commands to Command.txt (because it only adds commands when CommandAvailable.txt is empty and hasn't already been marked by itself). A separate thread on Program 2 empties the queue of command strings, parses them and executes them.
For example, in the Console, Player 1 may move to Trafalgar Square (or whatever the square would be called.) Program 1/Console would add the Command "move player1 trafalgar_square" to the queue, then check CommandAvailable.txt, and if it is empty, add all the commands in the queue to Command.txt. Program 2/The GUI would check CommandAvailable.txt and as it had been marked by Program 1, read the command, add it to the queue, and then move a picturebox that represents Player 1 to a square.
Please let me know if you think this methodology could be improved, or if you think it's simply stupid and there are far better alternatives or that I should just use Pipes instead. I'm going to be using VB.NET.
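To make the handshake described in the question concrete, here is a minimal sketch of both sides. It is written in Python purely as pseudocode for the protocol (the project itself will be VB.NET); the file names are the ones from the question and the polling intervals are arbitrary:

    import os
    import time

    COMMAND_FILE = "Command.txt"
    FLAG_FILE = "CommandAvailable.txt"

    # --- Program 1 (console/game) side --------------------------------------
    def send_commands(pending):
        """Flush queued command strings once the flag file is empty."""
        while os.path.exists(FLAG_FILE) and os.path.getsize(FLAG_FILE) > 0:
            time.sleep(0.05)                 # Program 2 hasn't consumed yet
        with open(COMMAND_FILE, "w") as f:   # clear and rewrite Command.txt
            f.write("\n".join(pending))
        with open(FLAG_FILE, "w") as f:      # mark commands as available
            f.write("CommandAvailable")
        pending.clear()

    # --- Program 2 (GUI) side ------------------------------------------------
    def poll_commands():
        """Return the queued command strings once the flag file is marked."""
        while not (os.path.exists(FLAG_FILE) and os.path.getsize(FLAG_FILE) > 0):
            time.sleep(0.05)
        with open(COMMAND_FILE) as f:
            commands = [line.strip() for line in f if line.strip()]
        open(FLAG_FILE, "w").close()         # clear the flag so Program 1 may write again
        return commands                      # e.g. ["move player1 trafalgar_square"]

Even in sketch form, the polling and the race between the two writers are visible; that is essentially what named pipes (or a local TCP socket) would handle for you.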
We are starting a project to handle big, big flat files. These files are kind of 'normalized' and we want to process them first into an intermediate file.
I would like to see a custom table for audit rows and a custom table for errors that are thrown during processing. Also errors must be stored in the Event Log.
What are the best practices according to audit & error handling in general for SSIS (VS2008)?
(edit)
We have made (I think) a very elegant solution by designing one master package. This package runs a child package (the one originally intended). The master package subscribes to the three events OnInformation, OnWarning and OnError. These events are routed to a generic audit & logging service that makes calls to the Enterprise Library Logging & Exception Handling blocks.
What I would recommend is to adopt the following philosophy for stable ETL processes coming from files:
Never cast anything in the connector; just import the fields as nvarchars of the maximum length they will achieve.
Procedurally add a row count so that casting errors can be tracked back to the offending row.
Cast and control each column to your specification.
If a row cannot be read at some stage, you will not know its index, but you will know that the file is malformed (extremely rare in my experience, except for half-transferred files), and it should be rejected anyway.
A quick screenshot of part of a file loading process shows how the rejection (after assigning row_id) can work (link to dataflow image). To this you can add countless further checks (duplicates...) and even have a repository for the loaded files to check upon the rejects and whatever else you might want to control (link to control flow image).
In some of my processes, I even use a flat file connector and just import each row as bulk text, then split it into columns with an intermediate script component, allowing for different versions of the columns in the files.
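Outside of SSIS, the "import each row as bulk text, then split" idea from the previous paragraph looks roughly like the Python sketch below; the delimiter, column counts and version rule are hypothetical stand-ins for whatever the script component would actually do:

    import csv

    def split_row(raw: str):
        """Split one raw line into columns, tolerating two file versions."""
        fields = raw.rstrip("\r\n").split(";")
        if len(fields) == 12:              # hypothetical old layout
            fields.insert(5, "")           # pad the column added in the new layout
        if len(fields) != 13:              # anything else is a malformed row
            raise ValueError(f"unexpected column count: {len(fields)}")
        return fields

    with open("big_flat_file.txt", encoding="utf-8") as src, \
         open("intermediate.csv", "w", newline="", encoding="utf-8") as dst, \
         open("rejects.txt", "w", encoding="utf-8") as rejects:
        writer = csv.writer(dst)
        for row_id, raw in enumerate(src, start=1):   # row_id for error tracking
            try:
                writer.writerow([row_id] + split_row(raw))
            except ValueError as exc:                 # malformed row -> reject it
                rejects.write(f"{row_id}\t{exc}\t{raw}")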
Anyway, sorry not to be more detailed (due to my status I can't add more links or any images), but I hope that you understand the concept.
Regards,
Francisco.