Deleting and Merging Snapshots in RTC

I have two streams, one main (mnt) and one patch stream (4.5_patch). I had created a snapshot on 4.5_patch called 4.6_patch_snap. I then deleted the 4.5_patch stream; it gave me a warning that content would be deleted and asked me to merge the snapshots into some other stream. I chose the mnt stream for merging and deleted the 4.5_patch stream. The 4.6_patch_snap is now showing in the mnt stream. Basically, I wanted to merge the two streams.
Will the mnt stream have all the changes that were done on the 4.5_patch stream and captured in 4.6_patch_snap?
Thank you for your help.

I then deleted the 4.5_patch stream; it gave me a warning that content would be deleted and asked me to merge the snapshots into some other stream. I chose the mnt stream for merging and deleted the 4.5_patch stream.
That is when you merged the 4.5_patch stream into mnt.
The 4.6_patch_snap is now showing in the mnt stream.
The 4.6_patch_snap is the result of that merge.
Basically, I wanted to merge the two streams.
Too late: you already did.

Get Flink FileSystem filename for secondary sink

This is for Flink on KDA (Kinesis Data Analytics), so it is limited to version 1.13.
The desired flow of the application:
ingest from a Kinesis/Kafka source
sink to S3
after the S3 sink, get the object key/filename and publish it to Kafka (to inform other processes that a file is ready to be examined)
The first two steps are simple enough. The third is the tricky one.
From my understanding, the S3 object key will be made up of the following:
bucket partition (let's assume the default here of processing date)
filename. The filename is made up of <part prefix> + <subtask index> + <part counter> + <part suffix>. The part file then goes through a series of states: in-progress, pending, and finished.
A file reaches the finished state when a checkpoint occurs and all pending files are moved to finished.
What I would like is to use this information as the trigger for the Kafka publish: on checkpoint, give me a list of all the files (object keys) that have moved to the finished state. These can then be put on a Kafka topic.
Is this possible?
Thanks in advance
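
For what it's worth: as far as I know, the 1.13 FileSink does not hand the committed object keys back to user code, so a common workaround for step 3 is to let S3 report them instead (an s3:ObjectCreated event notification to SQS or Lambda, which then publishes to Kafka). For the sink itself (step 2), here is a minimal sketch using the FileSink row format; the bucket path, part prefix, and suffix are illustrative assumptions, not values from the question:

import org.apache.flink.api.common.serialization.SimpleStringEncoder;
import org.apache.flink.connector.file.sink.FileSink;
import org.apache.flink.core.fs.Path;
import org.apache.flink.streaming.api.functions.sink.filesystem.OutputFileConfig;
import org.apache.flink.streaming.api.functions.sink.filesystem.rollingpolicies.OnCheckpointRollingPolicy;

// events is assumed to be an existing DataStream<String> from the Kinesis/Kafka source.
FileSink<String> sink = FileSink
        .forRowFormat(new Path("s3://my-bucket/output"), new SimpleStringEncoder<String>("UTF-8"))
        // Roll part files on every checkpoint, so pending files move to
        // finished exactly at checkpoint time, as described above.
        .withRollingPolicy(OnCheckpointRollingPolicy.build())
        // These settings control the <part prefix> and <part suffix>
        // pieces of the final object key.
        .withOutputFileConfig(OutputFileConfig.builder()
                .withPartPrefix("part")
                .withPartSuffix(".json")
                .build())
        .build();

events.sinkTo(sink);

Pinning the prefix and suffix this way at least makes the finished object keys predictable for whatever process consumes the S3 notifications.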

Am I able to view a list of devices by partition in iothub?

I have 2 nodes of a cluster receiving messages from IoT Hub. I split their responsibility by partition: node 1 reads from partitions 1, 3, 5, 7, and 9, and the other from 2, 4, 6, 8, and 0. Recently, partition 8 stops responding until I stop my code and restart it. It seems like a device is sending a message that locks up the partition. What I want to do is list all devices in partition 8. Is that possible? Is there a Cloud Shell command to get those devices in a list?
Not sure this will help you, but you can see the partition on the incoming messages. For example you could use Azure Stream Analytics to see the partitions using this query:
SELECT GetMetadataPropertyValue(IoTHub, '[IoTHub].[ConnectionDeviceId]') AS DeviceId, partitionId
FROM IoTHub
Also, if you run the job locally in Visual Studio, it will tell you which device is sending malformed JSON, e.g.:
[Warning] 10/21/2021 9:12:54 AM : User Warning Source 'IoTHub' had 1 occurrences of kind 'InputDeserializerError.InvalidData' between processing times '2021-10-21T15:12:50.5076449Z' and '2021-10-21T15:12:50.5712076Z'. Could not deserialize the input event(s) from resource 'Partition: [1], Offset: [455266583232], SequenceNumber: [634800], DeviceId: [DeviceName]' as Json. Some possible reasons: 1) Malformed events 2) Input source configured with incorrect serialization format
Also check your "Activity Log" blade in the ASA job. It may have more details for you.
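
As far as I know, there is no portal view or Cloud Shell command that lists devices per partition: IoT Hub assigns a device to a partition by hashing its device ID, so the mapping is not exposed as a queryable list. What you can do is read partition 8 directly from the hub's built-in Event Hub-compatible endpoint and record the sender of every event. A minimal sketch with the azure-messaging-eventhubs Java SDK; the environment variable name is an assumption, and the connection string comes from the hub's "Built-in endpoints" blade:

import com.azure.messaging.eventhubs.EventHubClientBuilder;
import com.azure.messaging.eventhubs.EventHubConsumerClient;
import com.azure.messaging.eventhubs.models.EventPosition;

public class Partition8Devices {
    public static void main(String[] args) {
        // Event Hub-compatible connection string for the IoT Hub
        // (the variable name is an assumption for this sketch).
        String conn = System.getenv("IOTHUB_EVENTHUB_CONN");

        EventHubConsumerClient consumer = new EventHubClientBuilder()
                .connectionString(conn)
                .consumerGroup(EventHubClientBuilder.DEFAULT_CONSUMER_GROUP_NAME)
                .buildConsumerClient();

        // Pull a batch of events from partition 8 only and print the
        // sending device, which IoT Hub stamps as a system property.
        consumer.receiveFromPartition("8", 100, EventPosition.earliest())
                .forEach(event -> System.out.println(
                        event.getData().getSystemProperties()
                             .get("iothub-connection-device-id")));

        consumer.close();
    }
}

Every device ID that shows up here is pinned to partition 8, which also gives you the candidate list for whichever device is sending the bad messages.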

How can I set/reset/change the stream top item id in Redis?

While I was doing some testing in a Redis stream, I added an entry to it with a very high ID:
XADD mystream 9999999999999999-1 field value
Now I've found that this is the stream's top item, and trying to add anything with an automatic ID gets me
9999999999999999-2
Trying to add a stream with any ID lower than this results in an error:
(error) ERR The ID specified in XADD is equal or smaller than the target stream top item
I can just reset the stream back to my previous save of it, but I'm curious whether there's otherwise any way to undo this action or to reset the stream's top item ID.
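
In case it helps: since Redis 5.0 there is an XSETID command that rewrites a stream's last ID, which is exactly the "reset the top item ID" operation. Note that recent Redis versions refuse to set the last ID below the newest entry still in the stream, so first delete the bogus entry (assuming you don't want to keep it); the ID 1111111111111-0 below is a stand-in for the ID of your real newest entry:

XDEL mystream 9999999999999999-1
XSETID mystream 1111111111111-0

After that, XADD with an automatic ID will generate IDs from that point onward again.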

Code delivery in a stream: impact on another stream

If code is delivered to a stream, will it have any impact on another stream which has the same component?
E.g.:
Stream 1: Comp 1 - baseline 1
Stream 2: Comp 1 - baseline 1
If I create a repo workspace out of Stream 2, make code changes, and deliver to Stream 2, will the change be available in Stream 1?
Are the components the same or two different copies?
Are the components the same or two different copies?
They are the same component.
But each stream only displays the LATEST change sets delivered for that component.
That means delivering new change sets on Stream 2 (and making a new baseline) has no effect on the same component on Stream 1.

Restart my delta loading after deleting the InfoPackage in the PSA by mistake

I have got an issue here; can someone please help me resolve it?
I was trying to extract some data to DataSource 0FI_AP_6...
Then in the InfoPackage monitor I can see the following:
-->Requests (messages): Everything OK
-->Extraction (messages): Everything OK
-->Transfer (IDocs and TRFC): Missing messages or warnings
-->Info IDoc 2 : sent, not arrived ; IDoc ready for dispatch (ALE service)
Data Package 1 : 23752 Records arrived in BW
Data Package 2 : 15216 Records arrived in BW
Request IDoc : Application document posted
Info IDoc 1 : Application document posted
Info IDoc 3 : Application document posted
Info IDoc 4 : Application document posted
-->Processing (data packet): Everything OK
Data Package 1 ( 38672 Records ) : Everything OK
In the status menu I have a message like the following:
Missing data packages for PSA Table
Diagnosis
Data packets are missing from PSA Table. BI processing does not return any errors. The data transport from the source system to BI was probably incorrect.
Procedure
Check the tRFC overview in the source system. You access this log using the wizard or following the menu path "Environment -> Transact. RFC -> Source System".
Error handling: if the tRFC is incorrect, resolve the errors listed there. Check that the source system is connected properly to BI. In particular, check the remote user authorizations in BI.
Please suggest how to resolve this issue.
Thanks in advance for your help; a quick reply is much appreciated.
But the worst thing is that I deleted the InfoPackage in the PSA by mistake.
In the normal case, if I repeated the process, the delta load would be OK, but now the delta load keeps failing.
So, gurus:
1. How can I restart my delta loading correctly?
2. I want to modify the timestamp in the delta table, but how do I do it?
Go to transaction RSA7 in the source system. This will tell you the date/timestamp that the delta is set to. If the date was changed to a range that no longer works, then you will need to re-initialize the DataSource on the BW side. However, the delta date may still be fine, because it may never have been changed when you first tried your load, due to the connection issues.
You can create a new InfoPackage and set the update mode to Initialize DataSource with Data Transfer. This will essentially run a full load from the DataSource and then reset the delta pointer date/timestamp to when you ran it. This way you will capture all the data that you need, and anything that was already in the PSA should be overwritten.
Also note that you should delete the previous request that may contain bad data in the PSA, or set its status to red.
From the original error, it seems like you are having an RFC connection issue between the DataSource and BW. Contact your Basis support and have them check the connection to make sure it is good. To ensure that your DataSource is extracting properly, you can run transaction RSA3 on it in the source system. This will verify that the extraction of data is working properly.