Tivoli command line to search objects - tivoli-work-scheduler

Is there a CLI command to find Tivoli objects that match a search pattern?
For example, I need to check whether any object with a name containing "DUMMY" exists, and if so, what type of object it is (a Job Stream, a Tivoli Job, a Calendar, an Event, or a Resource).
I came across the idea that Tivoli stores all its object definitions in a database-like repository, and that we can run SQL-like SELECT queries to find an object definition, but I am not sure how to do that from a CLI command.

The composer command line lets you filter object names with wildcards (@ and ?), but you have to run the command once for each object type.
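As a rough sketch (the exact object keywords vary slightly between TWS versions, so check the composer reference for yours), the per-type searches could look like this, where @ matches any string and # separates the workstation from the object name:

composer "display jobs=@#@DUMMY@"
composer "display sched=@#@DUMMY@"
composer "display calendars=@DUMMY@"
composer "display resources=@#@DUMMY@"
composer "display eventrule=@DUMMY@"
composer "display prompts=@DUMMY@"

Each command prints the matching definitions for that one object type, so the type is implied by which command returned a hit.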

Related

Saving output from parsing a json file and passing it to BigQueryInsertJobOperator

I need some advice on solving this requirement for auditing purposes. I am using an Airflow (Cloud Composer) Dataflow Java operator job which writes out a JSON file with status and error message details after the job completes (into the Airflow data folder). I want to extract the status and error message from the JSON file via some operator, then pass those variables to the next pipeline task, a BigQueryInsertJobOperator that calls a stored procedure with the status and error message as input parameters, so that they finally get written into a BQ dataset table.
Thanks
You need XCom and Jinja templating. When you return metadata from an operator, it is stored in XCom, and you can retrieve it using Jinja templating or Python code in a PythonOperator (or in your custom operator).
These are two very good articles from Marc Lamberti (who also has really nice courses on Airflow) describing how templating and Jinja can be leveraged in Airflow: https://marclamberti.com/blog/templates-macros-apache-airflow/ and this one describes XCom: https://marclamberti.com/blog/airflow-xcom/
By combining the two you can get what you want.
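A minimal sketch of that combination for Airflow 2.x on Cloud Composer (the DAG id, file path, dataset, stored procedure, and task names are placeholders, not part of your setup):

import json
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator
from airflow.providers.google.cloud.operators.bigquery import BigQueryInsertJobOperator

def parse_dataflow_result():
    # read the JSON the Dataflow job wrote into the Composer data folder (path is a placeholder)
    with open("/home/airflow/gcs/data/dataflow_result.json") as f:
        result = json.load(f)
    # whatever the python_callable returns is pushed to XCom automatically
    return {"status": result["status"], "error_message": result.get("error_message", "")}

with DAG("audit_example", start_date=datetime(2024, 1, 1), schedule_interval=None, catchup=False) as dag:
    parse = PythonOperator(task_id="parse_dataflow_result", python_callable=parse_dataflow_result)

    call_proc = BigQueryInsertJobOperator(
        task_id="call_audit_proc",
        configuration={
            "query": {
                # "configuration" is a templated field, so Jinja can pull the XCom values here
                "query": (
                    "CALL my_dataset.sp_log_audit("
                    "'{{ ti.xcom_pull(task_ids='parse_dataflow_result')['status'] }}', "
                    "'{{ ti.xcom_pull(task_ids='parse_dataflow_result')['error_message'] }}')"
                ),
                "useLegacySql": False,
            }
        },
    )

    parse >> call_proc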

Execute informatica repository sql from informatica workflow

I am new to Informatica. I am using Informatica 10.1.0 and I have created a workflow like the one below.
How can I make this workflow execute the Informatica repository SQL below and fail the workflow if the count is greater than 0?
select count(*) as cnt
from REP_TASK_INST_RUN
where workflow_run_id = (select max(workflow_run_id) from OPB_WFLOW_RUN where WORKFLOW_NAME = 'wf_Load_Customer_Transactions')
and RUN_STATUS_CODE <> 0
You have shared a view from the Workflow Manager. In the Informatica Designer, you can create a mapping with the source as your table. In the Source Qualifier, add the query as a SQL override and then load this data into a designated target. After that you can create the workflow for your mapping and run it.
https://www.guru99.com/mappings-informatica.html
The above link should be a good reference.
Once you have a functional workflow, you can add a Control task for the above check, making the workflow fail if the count of target rows is < 1.
Design an Informatica mapping:
- The Source Qualifier contains the query you have provided, and its output is passed to an Expression transformation. Create a mapping variable that stores this value.
- Within the workflow, use the post-session workflow variable assignment to assign the mapping variable to a workflow variable.
- Create an Assignment task that checks the value of this workflow variable; if the count > 0, use a Control task to fail the workflow.
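A rough illustration of those steps with made-up variable names (the exact dialog labels differ by PowerCenter version):

In the Expression transformation: SETVARIABLE($$m_ErrorCount, CNT)
Post-session on success variable assignment: $$wf_ErrorCount = $$m_ErrorCount
Link condition into the Control task: $$wf_ErrorCount > 0
Control task option: Fail top-level workflow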
One way would be to create a mapping with your query inside a SQL Transformation. Set it up to write either to a flat file or to a table in the DB. Add a filter so that the count is written to the target only if it is greater than 0.
Then, in the workflow, set up a session for that mapping and link it to a Control Task that fails the workflow when the session has written a row (i.e. when $TgtSuccessRows is >= 1, meaning the count was greater than 0).
You can create a dummy session that runs your query, then link it to the next session. On the link you can put a condition such as $count = 0, so that the next session in the workflow runs only when the count is 0.

Variable values stored outside of SSIS

This is merely an SSIS question for advanced programmers. I have a SQL table that holds clientid, clientname, Filename, Ftplocationfolderpath, filelocationfolderpath.
This table holds a unique record for each of my clients. As my client list grows, I add a new row to my SQL table for that client.
My question is this: can I use the values in my SQL table and somehow reference each of them in my SSIS package variables based on client id?
The reason for the SQL table is that sometimes we get requests to change the delivery or the file name of a file we send externally. We would like to be able to change those things dynamically on the fly within the SQL table, instead of having to export the package each time, manually change it, and then re-import it. Each client has its own SSIS package.
Let me know if this is feasible. I'd appreciate any insight.
Yes, it is possible. There are two ways to approach this, and which one applies depends on how the job runs: whether you are running for a single client per job run, or for multiple clients per job run.
Either way, you will use the Execute SQL Task to retrieve data from the database and assign it to your variables.
You are running for a single client. This is fairly straightforward. In the Result Set, select the Single Row option, map the single row's result to the package variables, and go about your processing.
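For illustration, with a hypothetical dbo.ClientConfig table holding the columns from the question, the Execute SQL Task statement could look like this, with the ? parameter mapped to a ClientId package variable:

SELECT Filename, Ftplocationfolderpath, filelocationfolderpath
FROM dbo.ClientConfig
WHERE clientid = ?

For the multi-client case below, drop the WHERE clause so every client's row comes back in the Full Result Set.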
You are running for multiple clients. In the Result Set, select Full Result Set and assign the result to a single package variable of type Object - give it a meaningful name like ObjectRs. You will then add a Foreach Loop container configured as follows:
Type: Foreach ADO Enumerator
ADO object source variable: Select the ObjectRs.
Enumerator Mode: Rows in all the tables (ADO.NET dataset only)
In Variable mappings, map all of the columns in their sequential order to the package variables. This effectively transforms the package into a series of single transactions that are looped.
Yes.
I assume that you run your package once per client or use some loop.
At the beginning of the "per client" code, read all required values from the database into SSIS variables and then use these variables to define what you need. You should not hardcode client-specific information in the package.

Execute service builder generated sql file on postgresql

I would like to execute the SQL files generated by Service Builder, but the problem is that the SQL files contain types like LONG, VARCHAR... etc.
Some of these types don't exist in PostgreSQL (for example, LONG would be BIGINT).
Is there a simple way to convert the SQL files' structures so that they can run on PostgreSQL?
Execute ant build-db on the plugin and you will find a sql folder with various vendor-specific scripts.
Daniele is right: using the build-db task is obviously correct and is the right way to do it.
But... I remember a similar situation some time ago: I had only the Liferay pseudo-SQL file and needed to create the proper DDL. I managed to do this in the following way:
You need to have Liferay running on your desktop (or on the machine where the source SQL file is), as this operation requires the portal Spring context to be fully wired.
Go to Configuration -> Server Administration -> Script
Change the language to Groovy
Run the following script:
import com.liferay.portal.kernel.dao.db.DB
import com.liferay.portal.kernel.dao.db.DBFactoryUtil
DB db = DBFactoryUtil.getDB(DB.TYPE_POSTGRESQL)
db.buildSQLFile("/path/to/folder/with/your/sql", "filename")
where the first parameter is obviously the path and the second is the filename without the .sql extension. The file on disk should have the proper extension: it must be called filename.sql.
This will produce a tables folder next to your filename.sql, which will contain a single tables-postgresql.sql with your Postgres DDL.
As far as I remember, Service Builder uses the same method to generate database-specific code.
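Once you have the generated DDL, you can load it into PostgreSQL with psql, for example (the user and database names here are placeholders):

psql -U liferay -d lportal -f tables/tables-postgresql.sql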

create function sql server 2005 disable errors

I'm trying to write a *.bat file which runs all SQL scripts in a given folder (every file in this folder contains a CREATE FUNCTION script):
for /r "%~dp0\Production\Functions" %%X in (*.sql) do (
sqlcmd -S%1 -d%2 -b -i "%%X"
)
But some functions in the folder depend on others, so I get an "Invalid object name" error. Is there a way to disable this error?
Rename your files so that they're listed in the correct order of precedence. So, for example, if FuncA.sql uses FuncB.sql, then rename the files as 001-FuncB.sql, 002-FuncA.sql.
It is not possible to disable errors generated by SQL Server when you run (what I think of as) code-based objects: stored procedures, functions, views, triggers, and anything else that has to be the sole object of a batch submitted to SQL.
It is also awkward at best to work around this problem. Some options:
One way, as Joe Stefanelli recommends, is to name your files such that they get executed in proper order (by name, or perhaps by date created or something more esoteric).
Another way is to group related functions in single scripts, such that referenced objects must be created before referencing objects.
Or combine the above two, putting all your dependent objects in one script you can guarantee will always run first. Not so useful if you have nested references.
A last (and more kludgy) way is to iterate over your scripts several times (assuming your "create" script will properly deal with an object that already exists), until a given pass raises no errors.
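A rough sketch of that last approach, reusing the loop from the question and assuming each script can be re-run safely (e.g. it drops the function first if it already exists) and that three passes cover your dependency depth:

rem make several passes; functions that failed because a dependency
rem did not exist yet should succeed on a later pass
for /l %%P in (1,1,3) do (
  for /r "%~dp0\Production\Functions" %%X in (*.sql) do (
    sqlcmd -S%1 -d%2 -b -i "%%X"
  )
)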
For development purposes, we store code-based objects in individual files, but when it comes time to wrap the code up for a push to Production systems, I glom the files together, test them, and shuffle the contents around and retest until no more errors are generated.