Bulk edit UrbanCode configuration? - repository

I want to do some bulk search/edit operations on the scripts embedded in our UrbanCode components and applications, and possibly on the flowcharts and blueprints. Unfortunately, a lot of this is stored in UrbanCode's own repository, where it can only be accessed through the browser GUI, and I can't do things like grep for common patterns across the whole set.
Is there any documented way to check out/check in, or at least download, a copy of an entire UCD environment as text files that I could analyze?
Thanks.

I think the closest documented way to get some of the things you are looking for is to export the application and search through the resulting JSON file. Component processes with all their steps are included in the application export.
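Once you have the exports on disk, you can grep them however you like. Here is a rough, untested Java sketch that walks a directory of exported application JSON files and prints every line matching a pattern; the directory name ucd-exports and the default search pattern are just placeholders.

import java.io.IOException;
import java.nio.file.*;
import java.util.regex.Pattern;
import java.util.stream.Stream;

// Greps exported UrbanCode application JSON files for a pattern,
// e.g. a shell fragment that appears in embedded component process steps.
public class GrepUcdExports {
    public static void main(String[] args) throws IOException {
        Path exportDir = Paths.get(args.length > 0 ? args[0] : "ucd-exports"); // placeholder export directory
        Pattern pattern = Pattern.compile(args.length > 1 ? args[1] : "rm -rf"); // placeholder pattern

        try (Stream<Path> files = Files.walk(exportDir)) {
            files.filter(p -> p.toString().endsWith(".json"))
                 .forEach(p -> grep(p, pattern));
        }
    }

    private static void grep(Path file, Pattern pattern) {
        try (Stream<String> lines = Files.lines(file)) {
            int[] lineNo = {0};
            lines.forEach(line -> {
                lineNo[0]++;
                if (pattern.matcher(line).find()) {
                    System.out.printf("%s:%d: %s%n", file, lineNo[0], line.trim());
                }
            });
        } catch (IOException e) {
            System.err.println("Could not read " + file + ": " + e.getMessage());
        }
    }
}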

Related

I am going through project configuration in craftcms

I am going through the documentation for project configuration in Craft CMS and am not sure what the difference is between project-config/write and project-config/modify.
project-config/write takes the config currently stored in the database and writes it out as YAML files in the config/ folder. You usually don't need this since Craft does this automatically whenever you change something in the backend (unless you have turned that off).
project-config/rebuild attempts to rebuild the entire project-config based on the state of the entire database. This is only required in rare edge cases.
For the rest, you can check the official documentation here:
https://craftcms.com/docs/4.x/project-config.html#what-s-stored-in-project-config

Is there a way to import backups in NiFi?

Using NiFi v0.6.1 is there a way to import backups/archives?
And by backups I mean the files that are generated when you call
POST /controller/archive using the REST api or "Controller Settings" (tool bar button) and then "Back-up flow" (link).
I tried unzipping the backup and importing it as a template, but that didn't work. Comparing it to an exported template file, the formats are quite different. But perhaps there is a way to transform it into a template?
At the moment my workaround is to not select any components on the top-level flow and then select "create template", which adds a template containing all my components. Then I just export that. My issue with this is that it's a bit trickier to automate via the REST API. I used Fiddler to determine what the UI is doing: it first generates a snippet that includes all the components (labels, processors, connections, etc.), then calls create template (POST /nifi-api/controller/templates) using the snippet ID. So the template call is easy enough, but generating the definition for the snippet is going to take some work.
Note: Once the following feature request is implemented I'm assuming I would just use that instead:
https://cwiki.apache.org/confluence/display/NIFI/Configuration+Management+of+Flows
The entire flow for a NiFi instance is stored in a file called flow.xml.gz in the conf directory (flow.xml.tar in a cluster). The back-up functionality is essentially taking a snapshot of that file at the given point in time and saving it to the conf/archive directory. At a later point in time you could stop NiFi and replace conf/flow.xml.gz with one of those back-ups to restore the flow to that state.
Templates are a different format from the flow.xml.gz. Templates are more public facing and shareable, and can be used to represent portions of a flow, or the entire flow if no components are selected. Some people have used templates as a model to deploy their flows, essentially organizing their flow into process groups and making template for each group. This project provides some automation to work with templates: https://github.com/aperepel/nifi-api-deploy
You just need to stop NiFi, replace the NiFi flow configuration file (for example, this could be flow.xml.gz in the conf directory), and start NiFi back up.
If you have trouble finding it, check your nifi.properties file for the string nifi.flow.configuration.file= to find out what you've set this to.
If you are using clustered mode you need only do this on the NCM.
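For the restore itself, here is an untested Java sketch of the steps described above: it reads nifi.properties to find nifi.flow.configuration.file, backs up the current flow, and copies an archived snapshot over it. The install path and archive file name are placeholders, and NiFi must be stopped while this runs.

import java.io.FileReader;
import java.io.IOException;
import java.nio.file.*;
import java.util.Properties;

// Restores an archived flow by copying it over the active flow configuration file.
// Run this only while NiFi is stopped; the paths below are placeholders for your install.
public class RestoreNifiFlow {
    public static void main(String[] args) throws IOException {
        Path nifiHome = Paths.get("/opt/nifi");                          // assumed install directory
        Path backup = Paths.get("/opt/nifi/conf/archive/flow.xml.gz");   // assumed archived snapshot to restore

        // nifi.properties tells us where the active flow file lives.
        Properties props = new Properties();
        try (FileReader reader = new FileReader(nifiHome.resolve("conf/nifi.properties").toFile())) {
            props.load(reader);
        }
        Path flowFile = nifiHome.resolve(props.getProperty("nifi.flow.configuration.file", "conf/flow.xml.gz"));

        // Keep a copy of the current flow before overwriting it.
        Files.copy(flowFile, flowFile.resolveSibling(flowFile.getFileName() + ".bak"),
                   StandardCopyOption.REPLACE_EXISTING);
        Files.copy(backup, flowFile, StandardCopyOption.REPLACE_EXISTING);
        System.out.println("Restored " + backup + " to " + flowFile + "; start NiFi to load it.");
    }
}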

SQL Database in GitHub

I am building a Java app that uses an SQLite database to hold most of its data. For the end-user, the database would be almost entirely read-only, with very occasional edits. I'll (theoretically) be displaying/distributing it through my GitHub page, so my question is:
What's the best way to load the database into GitHub? (I'm using IntelliJ with DataGrip.)
I'd prefer to be able to update the database when I commit/push, instead of having to overwrite the whole file. The closest question I can find is How to include MySQL database schema on GitHub? but there could potentially be hundreds or thousands of entries, so I can't just rebuild the tables when the user installs the app.
I'm applying for entry-level developer jobs, and this project is going to be my main portfolio piece during job-hunting. I'm trying to make sure it is not only functional but also makes a good impression. Any help is (very) greatly appreciated.
EDIT:
After moving my .db file into the folder connected to GitHub (same level as my src folder), apparently I can now commit/push it with the rest of my files. How do I make sure that the connection from my Java code to the database stays valid once it is loaded onto another user's system? Can I just stick with
connection = DriverManager.getConnection("jdbc:sqlite:mydatabase.db");
or do I need to rework the path?
Upon starting, if your application can't find a corresponding SQLite database file, have it create one. Then do an initial load of your tables from CSV, JSON, or XML files.
You can upload these files to Git, as they are text formats.
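As a rough illustration of that approach, here is an untested Java sketch: on startup it checks whether the SQLite file exists, and if not, creates it via JDBC and seeds it from a CSV file shipped in the repository. The file names and the table schema are placeholders, and it assumes the sqlite-jdbc driver is on the classpath.

import java.io.IOException;
import java.nio.file.*;
import java.sql.*;
import java.util.List;

// On startup: if the SQLite file is missing, create it and seed it from a CSV
// committed to the repository. File names and the table schema are placeholders.
public class DatabaseBootstrap {
    public static Connection open() throws SQLException, IOException {
        Path dbFile = Paths.get("mydatabase.db");          // resolved against the working directory
        boolean firstRun = Files.notExists(dbFile);

        // Requires the sqlite-jdbc driver on the classpath.
        Connection conn = DriverManager.getConnection("jdbc:sqlite:" + dbFile);

        if (firstRun) {
            try (Statement st = conn.createStatement()) {
                st.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, name TEXT, value TEXT)");
            }
            List<String> rows = Files.readAllLines(Paths.get("seed/items.csv")); // seed data shipped as text
            try (PreparedStatement ps =
                     conn.prepareStatement("INSERT INTO items (id, name, value) VALUES (?, ?, ?)")) {
                for (String row : rows) {
                    String[] cols = row.split(",", 3);      // naive CSV split; fine for simple data
                    ps.setInt(1, Integer.parseInt(cols[0].trim()));
                    ps.setString(2, cols[1].trim());
                    ps.setString(3, cols[2].trim());
                    ps.addBatch();
                }
                ps.executeBatch();
            }
        }
        return conn;
    }
}

Committing the CSV (or JSON/XML) seed files instead of the binary .db also keeps your GitHub diffs readable when the data changes.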

List or schema of all dcm4chee xsl templates

Recently we managed to solve a data-transfer problem by finding out there is an additional .xsl file we could use. Since .xsl files seem to be the main way of controlling information flow in dcm4chee (besides the JMX configuration, of course), I'm wondering whether there is some kind of list or index that enumerates all the .xsl files one could use and their places in the workflow.
It would be nice to know exactly at which points we can influence the process.
I tried to Google for something like that, but no success so far :/
Any help will be appreciated.
An online list of demo .xsl files is available at
https://svn.code.sf.net/p/dcm4che/svn/dcm4chee/dcm4chee-arc/trunk/dcm4jboss-hl7/src/etc/conf/dcm4chee-hl7
Which one will be used is configurable through the jmx console and the configuration is then written back into the configuration files.
So searching for .xsl, both as a file extension and as a full-text search pattern, in your dcm4chee installation directory will point you exactly to the places where the workflow can be influenced.
It is a Java-style, Linux-style open-source project, so don't expect it to be too user-friendly or Google-friendly. I'm not aware of any easy-to-consume overview table/list/index, but it would be nice to have one. It is open source, so maybe you can add one.
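If you want to automate that search, here is a rough, untested Java sketch that walks an installation directory and reports both the .xsl files themselves and the other files that reference them; the install path is a placeholder.

import java.io.IOException;
import java.nio.file.*;
import java.util.stream.Stream;

// Walks a dcm4chee installation directory and reports (1) every .xsl file and
// (2) every other file whose text mentions ".xsl", i.e. the places where a
// stylesheet is plugged into the workflow. The install path is a placeholder.
public class FindXslHooks {
    public static void main(String[] args) throws IOException {
        Path installDir = Paths.get(args.length > 0 ? args[0] : "/opt/dcm4chee");

        try (Stream<Path> files = Files.walk(installDir)) {
            files.filter(Files::isRegularFile).forEach(p -> {
                if (p.toString().endsWith(".xsl")) {
                    System.out.println("STYLESHEET  " + p);
                } else if (mentionsXsl(p)) {
                    System.out.println("REFERENCES  " + p);
                }
            });
        }
    }

    private static boolean mentionsXsl(Path file) {
        try (Stream<String> lines = Files.lines(file)) {
            return lines.anyMatch(line -> line.contains(".xsl"));
        } catch (IOException | java.io.UncheckedIOException e) {
            return false; // skip binary or unreadable files
        }
    }
}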

OSX: Hook file read event

I have a particular file I want to monitor for file read attempts by all applications on OSX. I'd like to be able to interrupt the requests so I could decide which applications have permission to read the file and which don't (by querying the user, or checking a cache of user responses). Is this possible with the OSX API? If not, is it even possible to get a list of which applications or processes do read a file?
I'm not saying there's no way to do this, but what @Jonathan is talking about isn't it.
That API is for tracking the creation, change, and destruction of files. Notably, it is used by things like Spotlight to watch activity on the filesystem for new, interesting files.
But, wisely, reading isn't one of the events it tracks.
And even if reading WAS tracked, it is still the wrong mechanism, as it's a notification system after the fact, not in line with the call itself.
I seriously doubt what you want is possible the way you describe it.
With Access Control Lists, you can limit access at the user level (Fred can read the file, but Bob cannot). This is a setting on the file itself. But there's no mechanism to allow Bob's App1 to read a file while Bob's App2 cannot, since there's really no formal mechanism of "application identity" beyond the command being executed, or whatever the program "says" its name is (both of which can be spoofed by someone motivated enough).
However, feel free to crawl the Darwin sources -- no doubt the answer is buried in there somewhere near the open(2) call.
EDIT, regarding comment.
What are you trying to do? What's the overall context?
Another thing that you may want to try is to use FUSE.
FUSE is a utility that lets you have "user space filesystems". People use FUSE for many purposes, like reading NTFS volumes or mounting remote systems via SSH.
They have a simple example, that gives you a skeleton that you can fill in for your purposes.
For most of the use cases, you'll simply defer to the system. However, for OPEN you will add your logic. Then you could point your FUSE utility at a directory and "mount it". Then all of the files below that directory can use your new behavior.
I'm still not sure how you will identify apps by name, but if it's not a real "security" issue, just for local control, I imagine you can come up with something. Activity Monitor has app names, so they must be available, and FUSE will be running within the process space (I think), rather than through some external mechanism.
All that said, I think FUSE is your best bet, but it's probably not appropriate if you want to do this to "any file" with no preparation by the user (like not installing FUSE). If you wanted to do "any file", your FUSE system would need to be mounted at root, and then you'll simply have a full "clone" of the filesystem, with those files from the normal root "unprotected", while those from your new FUSE root will be protected. So, if someone wanted to NOT use your FUSE system, the real file is readily available to them through the actual file location.
If not, is it even possible to get a list of which applications or processes do read a file?
The command-line tool fs_usage allows you to monitor filesystem activity, including reading.
The writings of Amit Singh should come in very handy. He explored the API that provides FileSystem events a few years ago, and provided a sample tool that allows you to intercept FS events. It's open source!
If I remember his conclusion properly, there isn't an official API, but you can use Apple's tools to achieve what you want.