IBM BPM 8.5 Multi-instance sequence flow by custom order - sequence

I have a loan task (see the example below) that is repeated by a multi-instance loop over the following list:
loans[
[loanNo:1, dueDate: 2020-10-10],
[loanNo:2, dueDate: 2020-05-05],
[loanNo:3, dueDate: 2020-07-07]
]
How can I make the sequential loop iterate in a custom order, not by index (0, 1, 2) but by dueDate, so that the first element is the one with the closest date, 2020-05-05, then 2020-07-07, and so on?

You will have to order your array by dueDate before passing it to your multi-instance loop.
You could insert a script step in your process, before your multi-instance task, that does this ordering:
tw.local.orderedLoans = tw.local.loans.sort(function(a, b) {
    // assumes dueDate is a string like "2020-05-05"; lexical comparison sorts this format chronologically
    return a.dueDate.localeCompare(b.dueDate);
});
Then pass tw.local.orderedLoans to the task.
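If dueDate is held as a Date object rather than a string, localeCompare does not apply; a numeric comparison of the timestamps is safer (a sketch, assuming plain JavaScript Date semantics in the script step):
tw.local.orderedLoans = tw.local.loans.sort(function(a, b) {
    // compare millisecond timestamps; assumes dueDate is a JavaScript Date
    return a.dueDate.getTime() - b.dueDate.getTime();
});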

Related

How to filter a date field with a Swift Vapor/Fluent query

To avoid multiple inserts of the same person in a database, I wrote the following function:
func anzahlDoubletten(_ req: Request, nname: String, vname: String, gebTag: Date) async throws -> Int {
    try await Teilnehmer.query(on: req.db)
        .filter(\.$nname == nname)
        .filter(\.$vname == vname)
        .filter(\.$gebTag == gebTag)
        .count()
}
The function always returns 0, even if there are multiple records with the same surname, first name and birthday in the database.
Here is the resulting SQL query:
[ DEBUG ] SELECT COUNT("teilnehmer"."id") AS "aggregate" FROM "teilnehmer" WHERE "teilnehmer"."nname" = $1 AND "teilnehmer"."vname" = $2 AND "teilnehmer"."geburtstag" = $3 ["neumann", "alfred e.", 1999-09-09 00:00:00 +0000] [database-id: psql, request-id: 1AC70C41-EADE-43C2-A12A-99C19462EDE3] (FluentPostgresDriver/FluentPostgresDatabase.swift:29)
[ INFO ] anzahlDoubletten=0 [request-id: 1AC70C41-EADE-43C2-A12A-99C19462EDE3] (App/Controllers/TeilnehmerController.swift:49)
If I query directly, I obtain:
lwm=# select nname, vname, geburtstag from teilnehmer;
nname | vname | geburtstag
---------+-----------+------------
neumann | alfred e. | 1999-09-09
neumann | alfred e. | 1999-09-09
neumann | alfred e. | 1999-09-09
neumann | alfred e. | 1999-09-09
So count() should return 4, not 0:
lwm=# select count(*) from teilnehmer where nname = 'neumann' and vname = 'alfred e.' and geburtstag = '1999-09-09';
count
-------
4
My date formatter is defined like so:
let dateFormatter = ISO8601DateFormatter()
dateFormatter.formatOptions = [.withFullDate, .withDashSeparatorInDate]
And finally the attribute "birthday" in my model:
...
@Field(key: "geburtstag")
var gebTag: Date
...
I inserted the 4 Alfreds into my database using the model and Fluent, passing the birthday "1999-09-09" as a String, and Fluent inserted all records correctly.
But .filter(\.$gebTag == gebTag) seems to constantly return 'false'.
Is it at all possible to use .filter() with data types other than String?
And if so, what am I doing wrong?
Many thanks for your help
Michael
The problem you've hit is that you're storing only dates whereas you're filtering on dates with times. Unfortunately there's no native way to store just a date. However there are a few options.
The easiest way is to change the date field to a String and then use your date formatter (make sure you remove the time part) to convert the value you filter on to a String as well.
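A sketch of what that could look like, reusing the formatter from the question (the gebTagString name is just for illustration):
let dateFormatter = ISO8601DateFormatter()
dateFormatter.formatOptions = [.withFullDate, .withDashSeparatorInDate]
// Render the incoming Date as a plain "1999-09-09"-style string before filtering
let gebTagString = dateFormatter.string(from: gebTag)
// With the model field declared as String, the filter becomes:
// .filter(\.$gebTag == gebTagString)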
I am guessing slightly here, but I suspect that your table was not created by a Migration? If it had been, your geburtstag field would include a time component as this is the default and you would have spotted the problem quickly.
In any event, the filter is actually filtering on the time component of gebTag as well as the date. This is why it is returning zero.
I suggest converting the geburtstag to a type that includes the time and ensuring that the time component is set to 0:00:00 when you store it. You can reset the time component to 'midnight' using something like this:
extension Date {
    // Reset the time component to 00:00:00 in the current calendar
    var midnight: Date {
        return Calendar.current.date(bySettingHour: 0, minute: 0, second: 0, of: self)!
    }
}
Then change your filter to:
.filter(\.$gebTag == gebTag.midnight)
Alternatively, just use startOfDay(for:) on Calendar:
.filter(\.$gebTag == Calendar.current.startOfDay(for: gebTag))
I think this is the most straightforward way of doing it.

FaunaDB: How to get documents created in the last hour?

How to get all documents created in the last hour?
I found the Paginate() ts parameter, but it only returns documents created earlier, not after.
That's strange; this code:
Paginate(Documents(Collection("fweets")), {
  events: true,
  after: Time("2020-05-22T19:12:07.121247Z")
})
should return the events after the given timestamp. Do you encounter an issue when trying to run this code?
The events in that result will include both create and delete events. An alternative way is to create an index on 'ts', but this will also give you documents that were updated after the given timestamp.
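Such an index could look roughly like this (a sketch; it assumes the index returns values ordered as ts then ref, which is what the Range and Lambda calls below rely on):
CreateIndex({
  name: "fweets_after_ts",
  source: Collection("fweets"),
  // range on ts first, and also return the ref so documents can be fetched
  values: [{ field: ["ts"] }, { field: ["ref"] }]
})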
Paginate(
Range(
Match(Index("fweets_after_ts")),
ToMicros(Time("2020-05-22T19:12:07.121247Z")),
null
)
)
A popular approach is to then get the events of these created/updated docs by running Paginate with events again on top of that result, which you can do by wrapping it in a Map + Paginate with events: true.
Map(
  Paginate(
    Range(
      Match(Index("fweets_after_ts")),
      ToMicros(Time("2020-05-22T19:12:07.121247Z")),
      null
    )
  ),
  Lambda(
    ['ts', 'ref'],
    Paginate(Var('ref'), { events: true, after: Time("2020-05-22T19:12:07.121247Z") })
  )
)

Google Pub/Sub to Dataflow, avoid duplicates with Record ID

I'm trying to build a streaming Dataflow job which reads events from Pub/Sub and writes them into BigQuery.
According to the documentation, Dataflow can detect duplicate message deliveries if a Record ID is used (see: https://cloud.google.com/dataflow/model/pubsub-io#using-record-ids).
But even using this Record ID, I still have some duplicates (around 0.0002%). Did I miss something?
EDIT:
I use the Spotify Async PubSub Client to publish messages with the following snippet:
Message.builder()
    .data(new String(Base64.encodeBase64(json.getBytes())))
    .attributes("myid", id, "mytimestamp", timestamp.toString)
    .build()
Then I use Spotify Scio to read the messages from Pub/Sub and save them with Dataflow:
val input = sc.withName("ReadFromSubscription")
.pubsubSubscription(subscriptionName, "myid", "mytimestamp")
input
.withName("FixedWindow")
.withFixedWindows(windowSize) // apply windowing logic
.toWindowed // convert to WindowedSCollection
//
.withName("ParseJson")
.map { wv =>
wv.copy(value = TableRow(
"message_id" -> (Json.parse(wv.value) \ "id").as[String],
"message" -> wv.value)
)
}
//
.toSCollection // convert back to normal SCollection
//
.withName("SaveToBigQuery")
.saveAsBigQuery(bigQueryTable(opts), BQ_SCHEMA, WriteDisposition.WRITE_APPEND)
The Window size is 1 minute.
After only a few seconds of injecting messages, I already have duplicates in BigQuery.
I use this query to count duplicates:
SELECT
COUNT(message_id) AS TOTAL,
COUNT(DISTINCT message_id) AS DISTINCT_TOTAL
FROM my_dataset.my_table
-- returns: TOTAL = 273666, DISTINCT_TOTAL = 273564
And this one to look at them:
SELECT *
FROM my_dataset.my_table
WHERE message_id IN (
SELECT message_id
FROM my_dataset.my_table
GROUP BY message_id
HAVING COUNT(*) > 1
) ORDER BY message_id
-- returning, for instance:
row | id                                   | processed_at            | processed_at_epoch | message
1   | 00166a5c-9143-3b9e-92c6-aab52601b0be | 2017-02-02 14:06:50 UTC | 1486044410367      | { ...json1... }
2   | 00166a5c-9143-3b9e-92c6-aab52601b0be | 2017-02-02 14:06:50 UTC | 1486044410368      | { ...json1... }
3   | 00354cc4-4794-3878-8762-f8784187c843 | 2017-02-02 13:59:33 UTC | 1486043973907      | { ...json2... }
4   | 00354cc4-4794-3878-8762-f8784187c843 | 2017-02-02 13:59:33 UTC | 1486043973741      | { ...json2... }
5   | 0047284e-0e89-3d57-b04d-ebe4c673cc1a | 2017-02-02 14:09:10 UTC | 1486044550489      | { ...json3... }
6   | 0047284e-0e89-3d57-b04d-ebe4c673cc1a | 2017-02-02 14:08:52 UTC | 1486044532680      | { ...json3... }
The BigQuery documentation states that there may be rare cases where duplicates arrive:
"BigQuery remembers this ID for at least one minute" -- if Dataflow takes more than one minute before retrying the insert BigQuery may allow the duplicate in. You may be able to look at the logs from the pipeline to determine if this is the case.
"In the rare instance of a Google datacenter losing connectivity unexpectedly, automatic deduplication may not be possible."
You may want to try the instructions for manually removing duplicates. This will also allow you to see the insertID that was used with each row to determine if the problem was on the Dataflow side (generating different insertIDs for the same record) or on the BigQuery side (failing to deduplicate rows based on their insertID).
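For reference, the manual clean-up usually boils down to keeping one row per ID with a window function, along these lines (a standard SQL sketch against the table from the question; adjust the table and key column to your schema):
SELECT * EXCEPT(row_number)
FROM (
  SELECT *, ROW_NUMBER() OVER (PARTITION BY message_id) AS row_number
  FROM my_dataset.my_table
)
WHERE row_number = 1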

Pig: FILTER a relation by comparing each row with the next row of the same relation

I've been searching for a long time to solve my problem but have found nearly nothing helpful.
Hopefully some of you can give me a tip.
I have a relation A with the following format: username, timestamp, ip
For example:
Harald 2014-02-18T16:14:49.503Z 123.123.123.123
Harald 2014-02-18T16:14:51.503Z 123.123.123.123
Harald 2014-02-18T16:14:55.503Z 321.321.321.321
And I want to find out who changed their IP address in less than 5 seconds, so the second and the third row are the interesting ones.
I want to group the relation by username and compare the timestamp of the current row with that of the next row. If the IP address isn't the same and the timestamp is less than 5 seconds later, this should appear in the output.
Could someone help me with that issue?
Regards.
First I want to thank you for your time, but I'm actually stuck at the Sessionize part.
This is my data coming in:
aoebcu 2014-02-19T14:23:17.503Z 220.61.65.25
aoebcu 2014-02-19T14:23:14.503Z 222.117.144.19
aoebcu 2014-02-19T14:23:14.503Z 222.117.144.19
jekgru 2014-02-19T14:23:14.503Z 213.56.157.109
zmembx 2014-02-19T14:23:12.503Z 199.188.198.91
qhixcg 2014-02-19T14:23:11.503Z 203.40.104.119
And my code so far looks like this:
hijack_Reduced = FOREACH finalLogs GENERATE ClientUserName, timestamp, OriginalClientIP;
hijack_Filtered = FILTER hijack_Reduced BY OriginalClientIP != '-';
hijack_Sessionized = FOREACH (GROUP hijack_Filtered BY ClientUserName) {
    views = ORDER hijack_Filtered BY timestamp;
    GENERATE FLATTEN(Sessionize(views)) AS (ClientUserName,timestamp,OriginalClientIP,session_id);
}
But when I run this script, I get the following error message:
15:36:22 ERROR - org.apache.pig.tools.pigstats.SimplePigStats.setBackendException(542)
| ERROR 0: Exception while executing [POUserFunc (Name: POUserFunc(datafu.pig.sessions.Sessionize)[bag] - scope-199 Operator Key: scope-199) children: null at []]:
java.lang.IllegalArgumentException: Invalid format: "aoebcu"
I already tried a lot, but nothing worked.
Do you have an idea?
Regards
While you could write a UDF for this, you can actually make use of the UDFs already available in Apache DataFu to solve this.
My solution involves applying sessionization to the data. Basically you look at consecutive events and assign each event a session ID. If the time elapsed between two events exceeds a specified amount of time, in your case 5 seconds, then the next event gets a new session ID. Otherwise consecutive events get the same session ID. Once each event is assigned its session ID the rest is easy. We group by session ID and look for sessions that have more than one distinct IP address.
I'll walk through my solution.
Suppose you have the following input data. Both Harold and Kumar change their IP addresses, but Harold does it within 5 seconds, while Kumar does not. So the output of our script should simply be "Harold".
Harold,2014-02-18T16:14:49.503Z,123.123.123.123
Harold,2014-02-18T16:14:51.503Z,123.123.123.123
Harold,2014-02-18T16:14:55.503Z,321.321.321.321
Kumar,2014-02-18T16:14:49.503Z,123.123.123.123
Kumar,2014-02-18T16:14:55.503Z,123.123.123.123
Kumar,2014-02-18T16:15:05.503Z,321.321.321.321
Load the data
data = LOAD 'input' using PigStorage(',')
AS (user:chararray,time:chararray,ip:chararray);
Now define a couple UDFs from DataFu. The Sessionize UDF performs sessionization as I described earlier. The DistinctBy UDF will be used to find the distinct IP addresses within each session.
define Sessionize datafu.pig.sessions.Sessionize('5s');
define DistinctBy datafu.pig.bags.DistinctBy('1');
Group the data by user, sort by time, and apply the Sessionize UDF. Note that the timestamp must be the first field, as this is what Sessionize expects. This UDF appends a session ID to each tuple.
data = FOREACH data GENERATE time,user,ip;
data_sessionized = FOREACH (GROUP data BY user) {
views = ORDER data BY time;
GENERATE flatten(Sessionize(views)) as (time,user,ip,session_id);
}
Now that the data is sessionized, we can group by the user and session. I group by user too because I want to spit this value back out. We pass the bag of events into the DistinctBy UDF. Check the documentation of this UDF for a more detailed description. But essentially we will get as many tuples as there are distinct IP addresses per session. Note that I have removed the time from the relation below. This is because 1) it isn't needed, and 2) the DistinctBy in 1.2.0 of DataFu has a bug when handling fields containing dashes, as the time field does.
data_sessionized = FOREACH data_sessionized GENERATE user,ip,session_id;
data_sessionized = FOREACH (GROUP data_sessionized BY (user, session_id)) GENERATE
group.user as user,
SIZE(DistinctBy(data_sessionized)) as distinctIpCount;
Now select all the sessions that had more than one distinct IP address and return the distinct users for these sessions.
data_sessionized = FILTER data_sessionized BY distinctIpCount > 1;
data_sessionized = FOREACH data_sessionized GENERATE user;
data_sessionized = DISTINCT data_sessionized;
This produces simply:
Harold
Here is the full source code, which you should be able to paste directly into the DataFu unit tests and run:
/**
define Sessionize datafu.pig.sessions.Sessionize('5s');
define DistinctBy datafu.pig.bags.DistinctBy('1'); -- distinct by ip
data = LOAD 'input' using PigStorage(',') AS (user:chararray,time:chararray,ip:chararray);
data = FOREACH data GENERATE time,user,ip;
data_sessionized = FOREACH (GROUP data BY user) {
views = ORDER data BY time;
GENERATE flatten(Sessionize(views)) as (time,user,ip,session_id);
}
data_sessionized = FOREACH data_sessionized GENERATE user,ip,session_id;
data_sessionized = FOREACH (GROUP data_sessionized BY (user, session_id)) GENERATE
group.user as user,
SIZE(DistinctBy(data_sessionized)) as distinctIpCount;
data_sessionized = FILTER data_sessionized BY distinctIpCount > 1;
data_sessionized = FOREACH data_sessionized GENERATE user;
data_sessionized = DISTINCT data_sessionized;
STORE data_sessionized INTO 'output';
*/
@Multiline
private String sessionizeUserIpTest;
private String[] sessionizeUserIpTestData = new String[] {
"Harold,2014-02-18T16:14:49.503Z,123.123.123.123",
"Harold,2014-02-18T16:14:51.503Z,123.123.123.123",
"Harold,2014-02-18T16:14:55.503Z,321.321.321.321",
"Kumar,2014-02-18T16:14:49.503Z,123.123.123.123",
"Kumar,2014-02-18T16:14:55.503Z,123.123.123.123",
"Kumar,2014-02-18T16:15:05.503Z,321.321.321.321"
};
@Test
public void sessionizeUserIpTest() throws Exception
{
PigTest test = createPigTestFromString(sessionizeUserIpTest);
this.writeLinesToFile("input",
sessionizeUserIpTestData);
List<Tuple> result = this.getLinesForAlias(test, "data_sessionized");
assertEquals(result.size(),1);
assertEquals(result.get(0).get(0),"Harold");
}

Copy contents of one table into another in Rails

I have two tables in Rails:
pending_products
processed_products
In pending_products there's a status field.
When a record gets added to the pending table (or updated), the status is set to 1.
When I want to process, I change all the 1's to 2 and then select all the 2's.
When I'm done with all the records, I change all the 2's to 1000 (seemed like a nice number to say "done").
(If you're wondering why I set it to 2: it's so that if a feed comes in while I'm processing, it wouldn't update that record, because the status would have been set to 1 by the feed.)
Before I change all the 2's to 1000, I want to insert the records into the processed table.
Now I can do an INSERT INTO in pure SQL, but I am wondering if there's a Rails way to do this, something more elegant than raw SQL.
Something like this might work for you:
class PendingProduct < ActiveRecord::Base
  OUTSTANDING = 1
  PROCESSING  = 2
  PROCESSED   = 1000

  scope :outstanding, -> { where(status: OUTSTANDING) }

  def process
    transaction do
      self.status = PROCESSING
      self.save!
      # do whatever processing you need to do...
      # ...then create your ProcessedProduct record...
      ProcessedProduct.create!( ... )
      # ...and finally update this PendingProduct
      self.status = PROCESSED
      self.save!
    end
  end
end
PendingProduct.outstanding.each(&:process)
This is arguably more "elegant" than raw SQL, but it's guaranteed to be slower.
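For the bulk status flips themselves (1 to 2 before processing, 2 to 1000 afterwards), update_all keeps things in ActiveRecord while still issuing a single UPDATE per flip (a sketch using the constants above):
# mark everything outstanding as in-progress before the run
PendingProduct.where(status: PendingProduct::OUTSTANDING).update_all(status: PendingProduct::PROCESSING)
# ... process and copy rows into processed_products ...
# then mark the batch as done
PendingProduct.where(status: PendingProduct::PROCESSING).update_all(status: PendingProduct::PROCESSED)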