How to view data in a queue - redis

How can I see what is in the queue named "payment" by entering a command in PowerShell?
For example, by entering "keys *" we see the list of keys, including the queues. But how exactly can I view what is in the queue, and in what order the data is arranged?
What do I need to write so that I can see the data inside the queue?
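If the "payment" queue is stored as a plain Redis list (the most common way a queue is modeled in Redis), you can inspect it from PowerShell with redis-cli. A minimal sketch, assuming redis-cli is on your PATH and the default connection settings:

    redis-cli TYPE payment          # confirm how the key is stored (list, stream, ...)
    redis-cli LLEN payment          # number of elements in the list
    redis-cli LRANGE payment 0 -1   # print every element, in list order
    redis-cli LRANGE payment 0 9    # or just the first ten elements

LRANGE only reads; it does not remove anything from the queue. Which end of the list is the "front" of your queue depends on how your application pushes and pops (LPUSH/RPOP vs. RPUSH/LPOP). If TYPE reports a stream instead of a list, use XRANGE payment - + COUNT 10 to page through the entries.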

Related

Why is my Upsolver Kafka data source stuck and/or not pulling any data

The Kafka topic has messages, but the Upsolver data source is stuck or not pulling any new messages. We have about 15 such data sources; some are working fine but some seem stuck. What is happening?
When a Kafka data source is created, there is a configuration option called "Read From Start". When it is true (checked), the Kafka data source will only pull data if there were already messages in the topic at the time the data source was created. If the topic was empty, the data source will get stuck and stall. In that case, the data source should be created with the Read From Start property unchecked; it will then start pulling messages from the point of its creation.
Alternatively, if you can't or don't want to wait for real data to arrive before creating the data source, you could publish a sample dummy message matching your schema to the topic before creating it with Read From Start = True.

How to check whether a consumer group already exists in Redis?

Currently I am looking for an elegant way to check whether a consumer group on a Redis stream already exists.
I have a few modules which connect to the same stream and read data from it. They can start in any order, and if the consumer group has not been created yet, each tries to create it.
If the first module has already created the group, the others get an error, as described in the documentation.
From the documentation:
If the specified consumer group already exists, the command returns a -BUSYGROUP error.
I would like to avoid this error.
I use Jedis client for work with Redis.
I know there is the XINFO command (which can return the list of groups), but it doesn't work when Redis is started in cluster mode (which can be one of my configurations).
There is no other way; as you covered in your question, there are two options:
XGROUP CREATE and catch the error in case the group is already there (see the Jedis sketch below).
XINFO GROUPS and look for the group, but that won't be atomic; a parallel group create might run right after you get the info back.
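For the first option, a minimal sketch with Jedis (assuming a recent Jedis version that exposes xgroupCreate with a makeStream flag; the stream key and group name are placeholders):

    import redis.clients.jedis.Jedis;
    import redis.clients.jedis.StreamEntryID;
    import redis.clients.jedis.exceptions.JedisDataException;

    public static void ensureGroup(Jedis jedis, String streamKey, String groupName) {
        try {
            // XGROUP CREATE <key> <group> $ MKSTREAM - also creates the stream if it is missing
            jedis.xgroupCreate(streamKey, groupName, StreamEntryID.LAST_ENTRY, true);
        } catch (JedisDataException e) {
            // Another module created the group first - safe to ignore
            if (e.getMessage() == null || !e.getMessage().contains("BUSYGROUP")) {
                throw e;
            }
        }
    }

Because XGROUP CREATE targets a single key, this pattern stays atomic, and the same catch-BUSYGROUP approach applies in cluster mode.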

Snowflake COPY INTO Command return

I have a question about the Snowflake COPY INTO command; I searched but did not find an answer.
Suppose I want to push data from Snowflake to an S3 bucket using the COPY INTO command in my code. How will I know that the file is ready or the command has completed, so that I can read the file from the S3 location?
You can do the following things to check whether your COPY INTO was successful, or at least to retrieve some useful information about your command:
Set DETAILED_OUTPUT = TRUE and check the result (this means you get information about every single unloaded file as output; if set to "false" you only receive information about the whole unload process). See the example below.
Query your stage using the syntax described here: https://docs.snowflake.com/en/user-guide/querying-stage.html
Query the metadata of your staged data by using metadata$filename and metadata$file_row_number: https://docs.snowflake.com/en/user-guide/querying-metadata.html
Keep in mind that even a failed COPY command can result in some unloaded files on your stage.
More information can also be found at https://docs.snowflake.com/en/sql-reference/sql/copy-into-location.html#validating-data-to-be-unloaded-from-a-query
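As an illustration of the DETAILED_OUTPUT point, a sketch of an unload statement (the stage, table and file format here are made-up placeholders, not from the question):

    COPY INTO @my_unload_stage/payments/
      FROM my_db.my_schema.payments
      FILE_FORMAT = (TYPE = CSV COMPRESSION = GZIP)
      DETAILED_OUTPUT = TRUE;

    -- With DETAILED_OUTPUT = TRUE the statement returns one row per unloaded file
    -- (file name, size, row count), so when it returns you know which files exist.

    LIST @my_unload_stage/payments/;   -- optional: double-check the files on the stage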
It depends on how you're actually running this:
Any Snowflake interface will run the command synchronously, so the query will simply spin until it's complete.
Any async call would need extra checks; the easiest is the web interface (it shows the status of the query, and once the query completes, the unload is complete).

Azure Data Factory Capture Error output from Notifications tab

I have a stored procedure that I use to log the progress of my ADF executions.
I can capture things like Data Factory Name (@pipeline().DataFactory) and RunId (@pipeline().RunId) and record these against the rows in the log table.
However, what I also want to capture is the error output from the Notifications tab when an execution fails.
For example
I tried this in the failure constraint (red arrow)
@activity('Execute LandingTbls').output
but the output in the log table from this was (not much help here)
System.Collections.Generic.Dictionary`2[System.String,System.Object]
How can this be done?
Basically, you can do it like this:
The expression is @activity('Validation1').Error.Message.
(On my side, the activity whose error message I want to check is Validation1; change it to the relevant activity on your side.)
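For reference, this is roughly how the failure-path stored procedure activity could look in pipeline JSON; the activity, procedure, parameter and linked service names below are placeholders, not taken from the original question:

    {
      "name": "Log Failure",
      "type": "SqlServerStoredProcedure",
      "dependsOn": [
        { "activity": "Execute LandingTbls", "dependencyConditions": [ "Failed" ] }
      ],
      "linkedServiceName": { "referenceName": "LoggingDb", "type": "LinkedServiceReference" },
      "typeProperties": {
        "storedProcedureName": "[dbo].[usp_LogPipelineError]",
        "storedProcedureParameters": {
          "RunId":        { "value": { "value": "@pipeline().RunId", "type": "Expression" }, "type": "String" },
          "ErrorMessage": { "value": { "value": "@activity('Execute LandingTbls').Error.Message", "type": "Expression" }, "type": "String" }
        }
      }
    }

The key point is that the expression is evaluated against the activity connected by the Failed dependency, so the stored procedure receives that activity's error message.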

Getting the JOB_ID variable in Pentaho Data Integration

When you log a job in Pentaho Data Integration, one of the fields is ID_JOB, described as "the batch id - a unique number increased by one for each run of a job."
Can I get this ID? I can see it in my logging tables, but I want to set up a transformation to get it. I think there might be a runtime variable that holds an ID for the running job.
I've tried using the Get Variables and Get System Info transformation steps to no avail. I am a new Kettle user.
You have batch_ids of the current transformation and of the parent job available on the Get System Info step. On PDI 5.0 they come before the "command line arguments", but order changes with each version, so you may have to look it up.
You need to create the variable yourself to house the parent job batch ID. The way to do this is to add another transformation as the first step in your job that sets the variable and makes it available to all the other subsequent transformations and job steps that you'll call from the job. Steps:
1) As you have probably already done, enable logging on the job
JOB SETTINGS -> SETTINGS -> CHECK: PASS BATCH ID
JOB SETTINGS -> LOG -> ENABLE LOGGING, DEFINE DATABASE LOG TABLE, ENABLE: ID_JOB FIELD
2) Add a new transformation, call it "Set Variables", as the first step after the start of your job
3) Create a variable that will be accessible to all your other transformations and that contains the value of the current job's batch ID
3a) ADD A GET SYSTEM INFO STEP. GIVE A NAME TO YOUR FIELD - "parentJobBatchID" AND TYPE OF "parent job batch ID"
3b) ADD A SET VARIABLES STEP AFTER THE GET SYSTEM INFO STEP. DRAW A HOP FROM THE GET SYSTEM INFO STEP TO THE SET VARIABLES STEP AS ITS MAIN OUTPUT
3c) IN THE SET VARIABLES STEP SET FIELDNAME: "parentJobBatchID", SET A VARIABLE NAME - "myJobBatchID", VARIABLE SCOPE TYPE "Valid in the Java Virtual Machine", LEAVE DEFAULT VALUE EMPTY
And that's it. After that, you can go back to your job and add subsequent transformations and steps and they will all be able to access the variable you defined by substituting ${myJobBatchID} or whatever you chose to name it.
IT IS IMPORTANT THAT THE SET VARIABLES STEP IS THE ONLY THING THAT HAPPENS IN THE "Set Variables" TRANSFORMATION AND ANYTHING ELSE YOU WANT TO ACCESS THAT VARIABLE IS ADDED ONLY TO OTHER TRANSFORMATIONS CALLED BY THE JOB. This is because transformations in Pentaho are multi-threaded and you cannot guarantee that the Set Variables step will happen before other activities in that transformation. The parent job, however, executes sequentially, so you can be assured that once you establish the variable containing the parent job batch ID in the first transformation of the job, all other transformations and job steps will be able to use that variable.
You can test that it worked before you add other functionality by adding a "Write To Log" step after the Set Variables transformation that writes the variable ${myJobBatchID} to the log for you to view and confirm it is working.
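As a hedged example of consuming the variable later (the schema, table and column names are made up): in a Table Input step inside one of the subsequent transformations, tick "Replace variables in script?" and reference the variable directly in the SQL:

    SELECT *
    FROM   staging.landing_rows
    WHERE  load_batch_id = ${myJobBatchID}

Because the variable was set with JVM scope in the first transformation of the job, every transformation the job runs afterwards can substitute it this way.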