How to collect signatures from multiple state participants in a Corda flow? - kotlin

I'm working on a use case with three parties involved, let's say PartyA, PartyB and PartyC.
In this scenario,
PartyA issues a dealState (A is the only participant),
PartyA sells it to PartyB (A and B are the participants),
now PartyB wants to sell this state to PartyC, but we need the signatures from both A and B, plus one from C accepting the sale.
How can I gather the signature from the original issuer PartyA in this third step in order to make the flow work?
The code in the flow is the following (I'm selling as PartyB):
val newOwnerFlow = initiateFlow(PartyC)
progressTracker.currentStep = GATHERING_SIGS
println("Finished gathering signatures stage 9")
// Send the state to the counterparty, and receive it back with their signature.
val fullySignedTx = subFlow(CollectSignaturesFlow(partSignedTx, setOf(newOwnerFlow), GATHERING_SIGS.childProgressTracker()))
// Stage 10.
progressTracker.currentStep = FINALISING_TRANSACTION
println("Finalizing transaction")
// Notarise and record the transaction in both parties' vaults.
return subFlow(FinalityFlow(fullySignedTx, FINALISING_TRANSACTION.childProgressTracker()))
How do I make PartyA sign the transaction?

After some experimenting I found out the problem is the following:
you have to create a set of flow sessions, one initiateFlow() session per participant other than yourself, and pass that set to CollectSignaturesFlow(). The syntax is like the following:
// Resolve each (possibly anonymous) participant to a well-known Party.
val participantsParties = dealState.participants.map { serviceHub.identityService.wellKnownPartyFromAnonymous(it)!! }
// Open a flow session with every participant except ourselves.
val flowSessions = (participantsParties - myIdentity).map { initiateFlow(it) }.toSet()
progressTracker.currentStep = GATHERING_SIGS
println("Finished gathering signatures stage 9")
// Send the state to the counterparty, and receive it back with their signature.
val fullySignedTx = subFlow(CollectSignaturesFlow(partSignedTx, flowSessions, GATHERING_SIGS.childProgressTracker()))
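Note that CollectSignaturesFlow only completes if every counterparty session is answered by a registered responder flow that runs SignTransactionFlow; otherwise the initiator hangs waiting for signatures. A minimal sketch of such a responder, assuming a hypothetical initiating flow named SellDealFlow (the names are illustrative, not from the question):

import co.paralleluniverse.fibers.Suspendable
import net.corda.core.flows.*
import net.corda.core.transactions.SignedTransaction

@InitiatedBy(SellDealFlow::class) // hypothetical @InitiatingFlow-annotated initiator
class SellDealResponder(private val counterpartySession: FlowSession) : FlowLogic<SignedTransaction>() {
    @Suspendable
    override fun call(): SignedTransaction {
        val signTransactionFlow = object : SignTransactionFlow(counterpartySession) {
            override fun checkTransaction(stx: SignedTransaction) {
                // Add any checks beyond contract validity here, e.g. that the
                // output really is the dealState this party expects to sign.
            }
        }
        return subFlow(signTransactionFlow)
    }
}

Both PartyA and PartyC need this responder installed, since each receives a session from the flowSessions set above.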

Related

Scalding Unit Test - How to Write A Local File?

I work at a place where Scalding writes are augmented with a specific API to track dataset metadata. When converting from normal writes to these special writes, there are some intricacies with respect to Key/Value, TSV/CSV, Thrift ... datasets. I would like to verify that the binary file is the same before the conversion and after the conversion to the special API.
Given that I cannot share the specific API for the metadata-inclusive writes, I only ask: how can I write a unit test for the .write method on a TypedPipe?
implicit val timeZone: TimeZone = DateOps.UTC
implicit val dateParser: DateParser = DateParser.default
implicit def flowDef: FlowDef = new FlowDef()
implicit def mode: Mode = Local(true)

val fileStrPath = root + "/test"
println("writing data to " + fileStrPath)
TypedPipe
  .from(Seq[Long](1, 2, 3, 4, 5))
  // .map((x: Long) => { println(x.toString); System.out.flush(); x })
  .write(TypedTsv[Long](fileStrPath))
  .forceToDisk
The above doesn't seem to write anything to local (OSX) disk.
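One likely reason, for reference: .write against an implicit FlowDef only registers the sink; nothing executes until the flow itself is run. A way to force a local run without JobTest is Scalding's Execution API; a rough sketch under that assumption:

import com.twitter.scalding._

// writeExecution returns an Execution[Unit]; unlike .write it does not
// depend on an ambient FlowDef, and waitFor actually runs the flow.
val writeEx: Execution[Unit] =
  TypedPipe
    .from(Seq[Long](1, 2, 3, 4, 5))
    .writeExecution(TypedTsv[Long](fileStrPath))

// Runs in local mode and blocks until the file is written.
writeEx.waitFor(Config.default, Local(strictSources = true))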
So I wonder if I need to use a MiniDFSCluster something like this:
def setUpTempFolder: String = {
  val tempFolder = new TemporaryFolder
  tempFolder.create()
  tempFolder.getRoot.getAbsolutePath
}
val root: String = setUpTempFolder
println(s"root = $root")
val tempDir = Files.createTempDirectory(setUpTempFolder).toFile
val hdfsCluster: MiniDFSCluster = {
  val configuration = new Configuration()
  configuration.set(MiniDFSCluster.HDFS_MINIDFS_BASEDIR, tempDir.getAbsolutePath)
  configuration.set("io.compression.codecs", classOf[LzopCodec].getName)
  new MiniDFSCluster.Builder(configuration)
    .manageNameDfsDirs(true)
    .manageDataDfsDirs(true)
    .format(true)
    .build()
}
hdfsCluster.waitClusterUp()
val fs: DistributedFileSystem = hdfsCluster.getFileSystem
val rootPath = new Path(root)
fs.mkdirs(rootPath)
However, my attempts to get this MiniCluster to work haven't panned out either - somehow I need to link the MiniCluster with the Scalding write.
Note: The Scalding JobTest framework for unit testing isn't going to work, because the actual data written is sometimes wrapped in a bijection codec or set up with case class wrappers before the metadata-inclusive write APIs perform the writes.
Any ideas how I can write a local file (without using the Scalding REPL) with either Scalding alone or a MiniCluster? (If using the latter, I need a hint on how to read the file back.)
Answering my own question ... There is an example of how to use a mini cluster for exactly this: reading from and writing to HDFS. I will be able to cross-read my different writes and examine them. It is in the tests for Scalding's TypedParquet type.
HadoopPlatformJobTest is an extension of JobTest that uses a MiniCluster.
With some hand-waving over the details in the link, the bulk of the code is this:
"TypedParquetTuple" should {
"read and write correctly" in {
import com.twitter.scalding.parquet.tuple.TestValues._
def toMap[T](i: Iterable[T]): Map[T, Int] = i.groupBy(identity).mapValues(_.size)
HadoopPlatformJobTest(new WriteToTypedParquetTupleJob(_), cluster)
.arg("output", "output1")
.sink[SampleClassB](TypedParquet[SampleClassB](Seq("output1"))) {
toMap(_) shouldBe toMap(values)
}
.run()
HadoopPlatformJobTest(new ReadWithFilterPredicateJob(_), cluster)
.arg("input", "output1")
.arg("output", "output2")
.sink[Boolean]("output2")(toMap(_) shouldBe toMap(values.filter(_.string == "B1").map(_.a.bool)))
.run()
}
}

Hangfire executes job twice

I am using Hangfire.AspNetCore 1.7.17 and Hangfire.MySqlStorage 2.0.3 for software that is currently in production.
Now and then, we get a report of jobs being executed twice, despite the usage of the [DisableConcurrentExecution] attribute with a timeout of 30 seconds.
It seems that as soon as those 30 seconds have passed, another worker picks up that same job again.
The code is fairly straightforward:
public async Task ProcessPicking(HttpRequest incomingRequest)
{
    var filePath = await StoreStreamAsync(incomingRequest, TriggerTypes.Picking);
    var picking = await XmlHelper.DeserializeFileAsync<Picking>(filePath);
    // Delay by 20 minutes so outbound-out gets the chance to be sent first.
    BackgroundJob.Schedule(() => StartPicking(picking), TimeSpan.FromMinutes(20));
}
[TriggerAlarming("[IMPORTANT] Failed to parse picking message to **** object.")]
[DisableConcurrentExecution(30)]
public void StartPicking(Picking picking)
{
var orderlinePickModels = picking.ToSalesOrderlinePickQuantityRequests().ToList();
var orderlineStatusModels = orderlinePickModels.ToSalesOrderlineStatusRequests().ToList();
var isParsed = DateTime.TryParse(picking.Order.UnloadingDate, out var unloadingDate);
for (var i = 0; i < orderlinePickModels.Count; i++)
{
// prevents bugs with usage of i in the background jobs
var index = i;
var id = BackgroundJob.Enqueue(() => SendSalesOrderlinePickQuantityRequest(orderlinePickModels[index], picking.EdiReference));
BackgroundJob.ContinueJobWith(id, () => SendSalesOrderlineStatusRequest(
orderlineStatusModels.First(x=>x.SalesOrderlineId== orderlinePickModels[index].OrderlineId),
picking.EdiReference, picking.Order.PrimaryReference, isParsed ? unloadingDate : DateTime.MinValue));
}
}
[TriggerAlarming("[IMPORTANT] Failed to send order line pick quantity request to ****.")]
[AutomaticRetry(Attempts = 2)]
[DisableConcurrentExecution(30)]
public void SendSalesOrderlinePickQuantityRequest(SalesOrderlinePickQuantityRequest request, string ediReference)
{
var audit = new AuditPostModel
{
Description = $"Finished job to send order line pick quantity request for item {request.Itemcode}, part of ediReference {ediReference}.",
Object = request,
Type = AuditTypes.SalesOrderlinePickQuantity
};
try
{
_logger.LogInformation($"Started job to send order line pick quantity request for item {request.Itemcode}.");
var response = _service.SendSalesOrderLinePickQuantity(request).GetAwaiter().GetResult();
audit.StatusCode = (int)response.StatusCode;
if (!response.IsSuccessStatusCode) throw new TriggerRequestFailedException();
audit.IsSuccessful = true;
_logger.LogInformation("Successfully posted sales order line pick quantity request to ***** endpoint.");
}
finally
{
Audit(audit);
}
}
It schedules the main task (StartPicking), which creates the objects required for the two subtasks:
Send the picking details to the customer
Send a status update to the customer
The first job is duplicated. Perhaps the second job as well, but that is not important enough to care about, since it only concerns a status update. The first job, however, causes the customer to think that more items have been picked than in reality.
I would assume that Hangfire updates the state of a job to e.g. "in progress" and checks this state before starting a job. Is my timeout on DisableConcurrentExecution too low? Is it possible in this scenario that the database call to update the state takes about 30 seconds (to be fair, it is running on a slow server with ~8 GB RAM, 6 vCores), due to which a second worker is already picking up the job again?
Or is this a Hangfire-specific issue that must be tackled?
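For what it's worth, two knobs are commonly suggested for this symptom; both are sketches that depend on the exact package versions, not code from the question. The DisableConcurrentExecution argument only bounds how long a second worker waits to acquire the distributed lock, and Hangfire's MySQL storage has an invisibility timeout after which a still-running job can be re-fetched by another worker:

// Assumption: the job can run long; raise the lock timeout well above its worst case.
[DisableConcurrentExecution(timeoutInSeconds: 10 * 60)]
public void StartPicking(Picking picking) { /* ... */ }

// Assumption: Hangfire.MySqlStorage exposes InvisibilityTimeout; a job running
// longer than this may be considered abandoned and picked up a second time.
services.AddHangfire(config => config.UseStorage(
    new MySqlStorage(connectionString, new MySqlStorageOptions
    {
        InvisibilityTimeout = TimeSpan.FromMinutes(30)
    })));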

Akka - Unable to send Discriminated Unions as messages in F#

I am unable to use discriminated unions as messages to Akka actors. If anyone can point me at an example that does this, it would be much appreciated.
My own attempt at this is at git#github.com:Tweega/AkkaMessageIssue.git (snippets below). It is a cut-down version of a sample found at https://github.com/rikace/AkkaActorModel.git (Chat project).
Problem
The DU message never finds its target on the server actor, but is sent to the dead-letter box. If I send plain objects instead, they do arrive.
If I send a DU but set my server actor to listen for generic objects, the message does arrive, but its type is
seq [seq [seq []]
and I can't get at the underlying DU.
The DU I am trying to send as a message:
type PrinterJob =
    | PrintThis of string
    | Teardown
The client code
let system = System.create "MyClient" config
let chatClientActor =
spawn system "ChatClient" <| fun mailbox ->
let server = mailbox.Context.ActorSelection("akka.tcp://MyServer#localhost:8081/user/ChatServer")
let rec loop nick = actor {
let! (msg:PrinterJob) = mailbox.Receive()
server.Tell(msg)
return! loop nick
}
loop ""
Messages are forwarded to the client from console input:
while true do
    let input = Console.ReadLine()
    chatClientActor.Tell(PrintThis(input))
The server code
let system = System.create "MyServer" config
let chatServerActor =
spawn system "ChatServer" <| fun (mailbox:Actor<_>) ->
let rec loop (clients:Akka.Actor.IActorRef list) = actor {
let! (msg:PrinterJob) = mailbox.Receive()
printfn "Received %A" msg //Received seq [seq [seq []]; seq [seq [seq []]]] ???
match msg with
| PrintThis str ->
Console.WriteLine("Printing: {0} Do we get this?", str)
return! loop clients
| Teardown ->
Console.WriteLine("Tearing down now")
return! loop clients
}
loop []
Dependencies
(I am not using paket here) - PM commands below:
Install-Package Akka -Version 1.4.23
Install-Package Akka.Remote -Version 1.4.23
Install-Package Akka.FSharp -Version 1.4.23
I am hosting the application on net5.0.
Constructor argument names - oddity?
When passing class instances in as objects, Akka seems to be sensitive to the names of constructor parameters. The message gets handled, but the data is not copied across from client to server. If you have a property called Username, the constructor parameter cannot be, for example, uName; otherwise its value is null when it reaches the server. Code for this is in branch params.
type DoesWork(montelimar: string) =
    member x.Montelimar = montelimar

type DoesNotWork(montelimaro: string) =
    member x.Montelimar = montelimaro
I opened an issue in the Akka.NET repository: https://github.com/akkadotnet/akka.net/issues/5194
And added a detailed reproduction for this: https://github.com/akkadotnet/akka.net/pull/5196
But it looks like Newtonsoft.Json really can't perform this deserialization without being given a type hint, which Akka.NET's network serialization does not do by default for JSON:
type TestUnion =
    | A of string
    | B of int * string

type TestUnion2 =
    | C of string * TestUnion
    | D of int

[<Fact(Skip = "JSON.NET really does not support even basic DU serialization")>]
member _.``JSON.NET must serialize DUs`` () =
    let du = C("a-11", B(11, "a-12"))
    let settings = new JsonSerializerSettings()
    settings.Converters.Add(new DiscriminatedUnionConverter())
    let serialized = JsonConvert.SerializeObject(du, settings)
    let deserialized = JsonConvert.DeserializeObject(serialized, settings)
    Assert.Equal(du :> obj, deserialized)
That test will not pass and it doesn't use any of Akka.NET's infrastructure at all - so the default JSON serializer simply won't work for real-world F# use cases.
We can try changing the defaults of our serialization system to include a type hint, but that will take a lot of validation testing (for old Akka.Persistence data serialized without one).
A better solution, which my pull request validates, is to use Hyperion for polymorphic serialization instead - it will be similarly transparent to you but it has much more robust handling for complex types than Newtonsoft.Json and is actually faster: https://getakka.net/articles/networking/serialization.html#how-to-setup-hyperion-as-default-serializer
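For completeness, the setup described on that page amounts to installing the Akka.Serialization.Hyperion package and overriding the default object serializer in HOCON; this snippet mirrors the documented configuration:

akka.actor {
  serializers {
    hyperion = "Akka.Serialization.HyperionSerializer, Akka.Serialization.Hyperion"
  }
  serialization-bindings {
    "System.Object" = hyperion
  }
}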

Get transaction ID after completing a payment in Objective-C?

I am trying to get the transaction ID when a PayPal payment is done, that is, in the didCompletePayment method, but I can only get these details:
Confirmation: {
    client = {
        environment = sandbox;
        "paypal_sdk_version" = "2.12.2";
        platform = iOS;
        "product_name" = "PayPal iOS SDK";
    };
    response = {
        "create_time" = "2017-02-01T07:40:43Z";
        id = "PAY-4RK70135CF912010FLCIZB5A";
        intent = sale;
        state = approved;
    };
    "response_type" = payment;
}
I don't find the transaction ID here. Can anyone suggest how to get the transaction ID so that I can save it to the database?
I found some cURL concepts but I'm not sure where to start. Please give me some suggestions.
Thanks in advance.
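Not an authoritative answer, but a common approach: the id above (PAY-...) is the payment ID, not the sale/transaction ID. The sale ID can be looked up from PayPal's REST API using the payment ID; a sketch, assuming a valid OAuth access token:

curl -X GET https://api.sandbox.paypal.com/v1/payments/payment/PAY-4RK70135CF912010FLCIZB5A \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer <access-token>"

In the JSON response, the transaction ID should be under transactions -> related_resources -> sale -> id.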

How to use intermediate result of Reads with another Reads

The example I have is validating a credit card number string. The validations are: 1) an issuer should exist for the credit card number, and 2) the issuer should be one accepted by the merchant.
Here's the work I have so far. Ideally, I would like to use the intermediate Issuer result from the first Reads in the next one. Is there a better way?
Reads.filter[String](ValidationError("Invalid Issuer")) { cardNumber =>
  findIssuer(cardNumber).isDefined // Option[Issuer]
} andThen
  Reads.filter[String](ValidationError("Issuer not accepted")) { cardNumber =>
    // get issuer, then check issuer is accepted by merchant
  }
It's not a direct answer, but you might consider writing this logic as a for/yield expression:
val result: Either[String, Issuer] = for {
  card   <- json.validate[Card].asEither.leftMap(_ => "Reading error")
  issuer <- findIssuer(card.number) // returns Either[String, _]
  _      <- isAccepted(issuer)      // returns Either[String, _]
} yield issuer
P.S. It's a gateway case to start using scalaz Validation.
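As a more direct sketch in plain play-json, the intermediate Issuer can be threaded through by mapping to it instead of staying on the card number string. Here findIssuer: String => Option[Issuer] and the acceptance predicate are assumptions matching the question:

import play.api.data.validation.ValidationError
import play.api.libs.json._

def issuerReads(findIssuer: String => Option[Issuer],
                accepts: Issuer => Boolean): Reads[Issuer] =
  implicitly[Reads[String]]
    .map(findIssuer) // 1) look the issuer up once
    .collect(ValidationError("Invalid Issuer")) { case Some(issuer) => issuer }
    .filter(ValidationError("Issuer not accepted"))(accepts) // 2) reuse it here

This keeps both validations but runs findIssuer only once, and the resulting Reads produces the Issuer itself.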