What is the standard method of creating multiple streams of development of the same project in RTC source control?
Currently, to create a single stream, I create a repository workspace and its corresponding stream. I check in the project to the workspace and then deliver it to this new stream. To create a new stream of development for the project, do I need to repeat this process, or is there a better way, maybe using the command line?
No, you don't need to repeat the process.
I would recommend putting a baseline on the component you delivered in the first stream, or taking a snapshot of the first stream, which will label all the components in that stream.
Then you create a second stream, which you can:
fill component by component, specifying a baseline for each one,
or fill directly from the snapshot, which will put all the components, with their associated labels, into that new stream.
Then you create your repository workspace and start working.
So the idea behind a new stream is to specify from which versions you want to start working.
Hence the baselines or snapshot placed on the first stream: they help initialize the next stream without having to re-import everything.
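If you want to drive this from the command line, a rough sketch wrapping the RTC scm/lscm client in Python could look like the following. The subcommands and flags are assumptions and vary by RTC version (stream creation from the CLI only exists in newer releases), so check lscm help before relying on them; the repository URI and the stream and workspace names are placeholders.

```python
# Hypothetical sketch driving the RTC "lscm" client from Python.
# Subcommand names and flags are assumptions; verify with "lscm help".
import subprocess

REPO = "https://rtc.example.com:9443/ccm"   # placeholder repository URI

def lscm(*args):
    """Run an lscm command and stop on the first failure."""
    subprocess.run(["lscm", *args], check=True)

# 1. Snapshot the first stream so every component gets a labelled baseline.
lscm("create", "snapshot", "-r", REPO, "-n", "release-1.0", "Stream A")

# 2. Seed a second stream from that snapshot (CLI support for stream
#    creation is version-dependent; otherwise do this step in the web UI).
lscm("create", "stream", "-r", REPO, "-n", "Stream B", "Stream A")

# 3. Create a repository workspace on the new stream and start working.
lscm("create", "workspace", "-r", REPO, "-s", "Stream B", "dev-ws-stream-b")
```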
While configuring a particular data pipeline in Mosaic Decisions, I want to try out different operations by using the available process nodes. I would like to keep the first few configured nodes for future reference and continue to add some other nodes.
To do this, I'm currently cloning the flow after each incremental change. But because of this, many flows get configured and it becomes very difficult to keep track of them.
Is there any alternative way to save the history of these multiple configurations of the flow for future reference without cloning and executing them separately?
You can save the history of the changes you have made to the flow by saving it as a version, using the Save As Version option provided in the canvas header.
You can also add a description for each incremental step and edit a particular version later if you want. Each saved version can then be executed separately by publishing it from the Version tab and executing it normally.
I want to archive some existing files automatically, as part of a pipeline, by moving them to a new folder.
I've written a pipeline to do that, but since it's a copy-and-delete-the-original operation, the new file gets a new timestamp.
Is there any way to retain the original timestamps, either by actually moving the file or by explicitly setting the LastModified date? (There doesn't appear to be a setting on the Copy Data activity to retain the timestamp.)
I don't think this is supported through ADF's web UI. I could be wrong, but I haven't seen a way to do it.
But you could call the REST API for the Blob service and set the lastModified date that way. You could get the file's original lastModified date using the Get Metadata activity, copy the file to the new location, and then call the REST API to reset the property.
https://learn.microsoft.com/en-us/rest/api/storageservices/set-blob-properties
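As a rough illustration of that sequence outside ADF, here is a minimal Python sketch using the azure-storage-blob SDK; the connection string, container and blob names are placeholders, and the final Set Blob Properties call from the link above is left as a comment, since how (and whether) the service accepts a timestamp written back this way is something to verify against that documentation.

```python
# Minimal sketch of the get-metadata / copy / reset-property pattern using
# the azure-storage-blob Python SDK. Connection string, container and blob
# names are placeholders.
from azure.storage.blob import BlobServiceClient

service = BlobServiceClient.from_connection_string("<connection-string>")
src = service.get_blob_client("input", "report.csv")
dst = service.get_blob_client("archive", "report.csv")

# Step 1 (Get Metadata activity equivalent): capture the original timestamp.
original_last_modified = src.get_blob_properties().last_modified

# Step 2 (Copy Data activity equivalent): server-side copy, then delete source.
dst.start_copy_from_url(src.url)
src.delete_blob()

# Step 3: call the Set Blob Properties REST endpoint (see the link above)
# to push original_last_modified back onto the destination blob. Whether the
# service accepts a client-supplied Last-Modified is something to confirm
# against that documentation, so the call itself is not shown here.
```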
Using NiFi v0.6.1 is there a way to import backups/archives?
And by backups I mean the files that are generated when you call
POST /controller/archive using the REST API, or via "Controller Settings" (toolbar button) and then "Back-up flow" (link).
I tried unzipping the backup and importing it as a template, but that didn't work; comparing it to an exported template file, the formats are quite different. Perhaps there is a way to transform it into a template?
At the moment my workaround is to select no components on the top-level flow and then select "create template", which adds a template containing all my components. Then I just export that. My issue with this is that it's a bit more tricky to automate via the REST API. I used Fiddler to determine what the UI is doing: it first generates a snippet that includes all the components (labels, processors, connections, etc.), then it calls create template (POST /nifi-api/controller/templates) using the snippet ID. So the template call is easy enough, but generating the definition for the snippet is going to take some work.
Note: Once the following feature request is implemented, I'm assuming I would just use that instead:
https://cwiki.apache.org/confluence/display/NIFI/Configuration+Management+of+Flows
The entire flow for a NiFi instance is stored in a file called flow.xml.gz in the conf directory (flow.xml.tar in a cluster). The back-up functionality is essentially taking a snapshot of that file at the given point in time and saving it to the conf/archive directory. At a later point in time you could stop NiFi and replace conf/flow.xml.gz with one of those back-ups to restore the flow to that state.
Templates are a different format from flow.xml.gz. Templates are more public-facing and shareable, and can be used to represent portions of a flow, or the entire flow if no components are selected. Some people have used templates as a model to deploy their flows, essentially organizing the flow into process groups and making a template for each group. This project provides some automation to work with templates: https://github.com/aperepel/nifi-api-deploy
You just need to stop NiFi, replace the nifi flow configuration file (for example this could be flow.xml.gz in the conf directory) and start NiFi back up.
If you have trouble finding it, check your nifi.properties file for the string nifi.flow.configuration.file= to find out what you've set this to.
If you are using clustered mode, you only need to do this on the NCM.
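For a standalone instance, a minimal restore sketch in Python (the install path and the archive file name are placeholders) could look like this:

```python
# Sketch: restore an archived flow on a standalone NiFi instance.
# NIFI_HOME and the archive file name below are placeholders.
import shutil
import subprocess

NIFI_HOME = "/opt/nifi-0.6.1"
ARCHIVE = f"{NIFI_HOME}/conf/archive/flow.xml.gz"  # pick the back-up you want

subprocess.run([f"{NIFI_HOME}/bin/nifi.sh", "stop"], check=True)    # stop NiFi
shutil.copy(ARCHIVE, f"{NIFI_HOME}/conf/flow.xml.gz")               # swap in the archived flow
subprocess.run([f"{NIFI_HOME}/bin/nifi.sh", "start"], check=True)   # start NiFi back up
```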
I'm writing a program in LabVIEW to control two similar devices. To avoid duplicating the code I use subVIs. But I have a piece of code where I update some values on the GUI inside a while loop. I'd like to know whether it is possible to have this loop inside my subVI and have the subVI send one of its output parameters after each iteration.
To update your GUI from within a subVI you can do one of the following:
Create a queue or notifier in your top level VI and pass the reference in to your subVI. In the subVI, send the data to the queue or notifier. In the top level VI, have a loop that waits for data on the queue or notifier and writes that to the front panel indicator.
Create a control reference to the front panel indicator in the top level VI and pass the reference to your subVI. In the subVI, use a property node to write the Value property of the indicator.
If you look up queues, notifiers, control references and property nodes in the LabVIEW help, you'll find documentation and examples of how to use them.
Of these options, I would use a queue for any data where it's important that the top level VI receives every data point (e.g. if the data is being plotted on a chart or logged to a file) or a notifier where it's only necessary that the user sees the latest value. Using control references for this purpose is a bit 'quick and dirty' and can cause performance issues.
If you need to update more than a couple of indicators like this, you'll probably want to build a cluster containing the data you send to the queue/notifier, or containing the control references. Save your cluster as a typedef so that you can modify its contents without breaking your code.
Another option is a channel wire. A channel wire will send data from a producer loop to a consumer loop without the overhead of a control reference and property node, and without having to create and close a queue or notifier reference. If you make a simple VI with writer and reader loops as shown in the LabVIEW Help, then select the writer loop and go to Edit -> Create SubVI, you'll have a template to use.
We have different streams for different environments. It is a Grails project, so there is a property file called application.properties which has a property called app.version. I want that to be updated automatically after every promote done on the stream. Each stream will have a different version number. The trigger server_post_promote_trig will be used to handle the post-promote operation, but I am not sure how to access the files in the stream through the script. I tried to give the path as /Folder1/file as reflected in the XML trigger input file, but I cannot update the file because the trigger's Perl script complains it cannot find it.
Any help is much appreciated.
If I understand your question correctly, you want to increment the version in a file under source control whenever a promotion occurs in the stream. If that is correct, you need to create a workspace off said stream which will edit/keep/promote the new version of this file. I would create a separate script that gets called by the server_post_promote trigger whenever a promotion occurs in this stream. This script would be placed under source control somewhere accessible from the workspace you created above.
In AccuRev, files can only be modified via a workspace. Because of this, it may be better to implement a pre-promote trigger that updates the version information in the file when the user performs the workspace-to-stream promote.
This would be similar to the existing Addheader script that can be found in the examples directory on the AccuRev server.
Also, within the script you will probably want logic that detects the promotion of the version file itself, so that the trigger does not update the file again.
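As a rough sketch of the separate-script idea (not a definitive implementation), a post-promote version bump could look something like this in Python; the workspace path, the property file location, the version scheme, and the exact accurev keep/promote arguments are assumptions you would adapt to your setup.

```python
# Hypothetical post-promote helper: bump app.version in a dedicated
# workspace, then keep and promote it back to the stream. The paths,
# version scheme, and accurev invocations are assumptions.
import re
import subprocess

PROPS = "/workspaces/version-bump-ws/Folder1/application.properties"  # placeholder

def accurev(*args):
    subprocess.run(["accurev", *args], check=True)

# Read the current value and bump the last numeric segment.
with open(PROPS) as f:
    text = f.read()
match = re.search(r"^app\.version=(.+)$", text, re.MULTILINE)
parts = match.group(1).strip().split(".")
parts[-1] = str(int(parts[-1]) + 1)
new_version = ".".join(parts)
with open(PROPS, "w") as f:
    f.write(text.replace(match.group(0), f"app.version={new_version}"))

# Keep and promote the change from the workspace back to the stream.
# The trigger itself should skip promotions that touch only this file,
# otherwise this promote would fire it again.
comment = f"bump app.version to {new_version}"
accurev("keep", "-c", comment, PROPS)
accurev("promote", "-c", comment, PROPS)
```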