Using multiple render pipelines in a single MTLRenderCommandEncoder: How to Synchronize MTLBuffer? - objective-c

Central Issue
I have two render pipelines in a single render command encoder. The first pipeline writes to a buffer that is used by the second pipeline. This does not seem to work, and I suspect it is a synchronization problem. When I use a separate render command encoder for each render pipeline, I get the desired result. Can this be solved with a single render command encoder, or do I need two separate encoders to synchronize the buffer?
Here is the more specific case:
The first pipeline is a non-rasterizing pipeline that runs only a vertex shader; it writes MTLDrawPrimitivesIndirectArguments into a MTLBuffer, which is then used for the drawPrimitives call of the second pipeline. The setup looks like this:
// renderCommandEncoder is MTLRenderCommandEncoder
// firstPipelineState and secondPipelineState are two different MTLRenderPipelineState
// indirectArgumentsBuffer is a MTLBuffer containing MTLDrawPrimitivesIndirectArguments
// numberOfVertices is the number of vertices for the first pipeline
// first pipeline
renderCommandEncoder.setRenderPipelineState(firstPipelineState)
renderCommandEncoder.setVertexBuffer(indirectArgumentsBuffer, offset: 0, index: 0)
renderCommandEncoder.drawPrimitives(type: .point, vertexStart: 0, vertexCount: numberOfVertices)
// second pipeline
renderCommandEncoder.setRenderPipelineState(secondPipelineState)
renderCommandEncoder.setVertexBuffer(secondPipelineBuffer, offset: 0, index: 0)
renderCommandEncoder.drawPrimitives(type: .point, indirectBuffer: indirectArgumentsBuffer, indirectBufferOffset: 0)
renderCommandEncoder.endEncoding()
How can I make sure that the indirectArgumentsBuffer has been written to by the first pipeline when issuing a call to drawPrimitives for the second pipeline, which uses and needs the contents of indirectArgumentsBuffer?

I believe you need to use separate encoders. According to this (somewhat dated) documentation about function writes, only atomic operations are synchronized for buffers shared between draw calls.

Related

How to convert a function for a Pipeline

I have the function list_for_android = list(df[df['device_os'] == 'Android'].device_brand.unique()), which should return a list of brands whose operating system is Android. In its usual form this function works fine, but I need to put it into a Pipeline, and df[df['device_os'] == 'Android'] doesn't work in the Pipeline. When shape is called inside the Pipeline, it outputs (0, 18), i.e. there is no data there at all, although there should be about 1.8 million rows. How do I rewrite this piece of code for the Pipeline?

Changing the GemFire query ResultSender batch size

I am experiencing a performance issue related to the default batch size of the query ResultSender in a client/server configuration. I believe the default value is 100.
If I run a simple query to get keys (with some ORDER BY columns due to the PARTITION Region type), this default batch size causes too many chunks to be sent back, even for 1000 records. In my tests, even though the total query time is less than 100 ms, the app takes more than 10 seconds to process those chunks.
Reading between the lines in your problem statement, it seems you are:
Executing an OQL query on a PARTITION Region (PR).
Running the query inside a Function as recommended when executing queries on a PR.
Sending batch results (as opposed to streaming the results).
I also assume, since you posted exclusively in the #spring-data-gemfire channel, that you are using Spring Data GemFire (SDG) to:
Execute the query (e.g. using the SDG GemfireTemplate; you could also be using the GemFire Query API inside your Function directly)?
Implement the server-side Function using SDG's Function annotation support?
And possibly (indirectly) use SDG's BatchingResultSender, as described in the documentation?
NOTE: The default batch size in SDG is 0, NOT 100. Zero means stream the results individually.
Regarding #2 & #3, your implementation might look something like the following:
@Component
class MyApplicationFunctions {

    @GemfireFunction(id = "MyFunction", batchSize = "1000")
    public List<SomeApplicationType> myFunction(FunctionContext functionContext) {

        RegionFunctionContext regionFunctionContext = (RegionFunctionContext) functionContext;

        Region<?, ?> region = regionFunctionContext.getDataSet();

        if (PartitionRegionHelper.isPartitionRegion(region)) {
            region = PartitionRegionHelper.getLocalDataForContext(regionFunctionContext);
        }

        GemfireTemplate template = new GemfireTemplate(region);

        String OQL = "...";

        SelectResults<?> results = template.query(OQL); // or `template.find(OQL, args);`

        List<SomeApplicationType> list = ...;

        // process results, convert to SomeApplicationType, add to list

        return list;
    }
}
NOTE: Since you are most likely executing this Function "on Region", the FunctionContext type will actually be a RegionFunctionContext in this case.
The batchSize attribute on the SDG #GemfireFunction annotation (used for Function "implementations") allows you to control the batch size.
Of course, instead of using SDG's GemfireTemplate to execute queries, you can use the GemFire Query API directly, as mentioned above.
If you need even more fine-grained control over "result sending", then you can simply "inject" the ResultSender provided by GemFire into the Function, even if the Function is implemented with SDG, as shown above. For example:
@Component
class MyApplicationFunctions {

    @GemfireFunction(id = "MyFunction")
    public void myFunction(FunctionContext functionContext, ResultSender resultSender) {

        ...

        SelectResults<?> results = ...;

        // now process the results and use the `resultSender` directly
    }
}
This allows you to "send" the results however you see fit, as required by your application.
You can batch/chunk results, stream, whatever.
Although, you should be mindful of the "receiving" side in this case!
The one thing that might not be apparent to the average GemFire user is that GemFire's default ResultCollector implementation collects "all" the results before returning them to the application. This means the receiving side does not support streaming or batching/chunking of the results, so it cannot start processing them as soon as the server sends them (whether streamed, batched/chunked, or otherwise).
Once again, SDG helps you out here since you can provide a custom ResultCollector on the Function "execution" (client-side), for example:
@OnRegion("SomePartitionRegion", resultCollector = "myResultCollector")
interface MyApplicationFunctionExecution {

    void myFunction();

}
In your Spring configuration, you would then have:
@Configuration
class ApplicationGemFireConfiguration {

    @Bean
    ResultCollector myResultCollector() {
        return ...;
    }
}
Your "custom" ResultCollector could return results as a stream, a batch/chunk at a time, etc.
In fact, I have prototyped a "streaming" ResultCollector implementation that will eventually be added to SDG, here.
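To give a sense of what such a collector could look like, here is a rough, queue-backed sketch. This is my own illustration, not SDG's prototype; the class name and sentinel are made up, and it assumes the Apache Geode / GemFire 9 package names:

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

import org.apache.geode.cache.execute.FunctionException;
import org.apache.geode.cache.execute.ResultCollector;
import org.apache.geode.distributed.DistributedMember;

// Sketch only: results are pushed onto a queue as the servers send them, so the
// caller can start draining the queue instead of waiting for the full result set.
class QueueingResultCollector implements ResultCollector<Object, BlockingQueue<Object>> {

    static final Object END_OF_RESULTS = new Object(); // sentinel marking the end of results

    private final BlockingQueue<Object> results = new LinkedBlockingQueue<>();

    @Override
    public void addResult(DistributedMember member, Object result) {
        this.results.offer(result);
    }

    @Override
    public void endResults() {
        this.results.offer(END_OF_RESULTS);
    }

    @Override
    public BlockingQueue<Object> getResult() throws FunctionException {
        return this.results; // consumers poll until they see END_OF_RESULTS
    }

    @Override
    public BlockingQueue<Object> getResult(long timeout, TimeUnit unit) throws FunctionException {
        return this.results;
    }

    @Override
    public void clearResults() {
        this.results.clear();
    }
}

A client could register something like this as the myResultCollector bean shown above and drain the returned queue as results arrive, rather than waiting for the entire result set.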
Anyway, this should give you some ideas on how to handle the performance problem you seem to be experiencing. 1000 results is not a lot of data so I suspect your problem is mostly self-inflicted.
Hope this helps!
John,
Just to clarify, I use a client/server topology (actually WAN, but that is not important here). My client is a Spring Boot web app with a Kendo grid as the UI. Users can filter/sort on any combination of columns, which is passed to the Spring Boot app to generate dynamic OQL and handle pagination. So far, apart from being dynamic, my OQL queries are quite straightforward. I do not want to introduce server-side Functions due to the complexity of our global deployment process, but I can if you think that is something I have to do.
Again, thanks for your answers.

Parallel Tasks in Data Factory Custom Activity (ADF V2)

I am running a custom code activity in ADF v2 using the Batch Service. Whenever it runs, it creates only one CloudTask within my Batch Job, although I have more than two dozen Parallel.Invoke calls running. Is there a way to create multiple Tasks from one Custom Activity in ADF so that the processing can spread across all nodes in the Batch pool?
I have a fixed pool with two nodes. Max tasks per node is set to 8 and the scheduling policy is set to "Spread". I have only one Custom Activity in my pipeline with multiple Parallel.Invoke calls (almost two dozen). I was hoping this would create multiple CloudTasks spread across both of my nodes, as both nodes are single-core. It looks like when each Custom Activity runs in ADF, it creates only one Task (CloudTask) for the Batch Service.
My other hope was to use
https://learn.microsoft.com/en-us/azure/batch/tutorial-parallel-dotnet
and manually create multiple CloudTasks programmatically in a console application, then run that console application from the ADF Custom Activity, but CloudTask takes a task ID and a command line. I wanted to do something like the following, but instead of passing taskCommandLine, I wanted to pass a C# method name and parameters to execute:
string taskId = "task" + i.ToString().PadLeft(3, '0');
string taskCommandLine = "ping -n " + rand.Next(minPings, maxPings + 1).ToString() + " localhost";
CloudTask task = new CloudTask(taskId, taskCommandLine);
// Wanted to do: CloudTask task = new CloudTask(taskId, SomeMethod(args));
tasks.Add(task);
Also, it looks like we can't create CloudTasks using the .NET Batch API from within an ADF Custom Activity.
What I wanted to achieve
I have data in a SQL Server table and I want to run different transformations on it by slicing it horizontally or vertically (by picking rows or columns). I want to run those transformations in parallel (i.e. have multiple CloudTask instances so that each one can operate on a specific column independently and, after the transformation, load it into a different table). But the issue is that it looks like we can't use the .NET Batch Service API within ADF, and the only way seems to be having multiple Custom Activities in my Data Factory pipeline.
The application needs to be deployed on each node in the Batch pool, and CloudTasks need to be created by calling the application from the command line:
CloudTask task = new CloudTask(
    "MyTask",
    "cmd /c %AZ_BATCH_APP_PACKAGE_MyTask%\\myTask.exe -args -here");

Wait.on(signals) use in Apache Beam

Is it possible to write to a second BigQuery table after writing to the first has finished in a batch pipeline, using the Wait.on() method (a new feature in Apache Beam 2.4)? The example given in the Apache Beam documentation is:
PCollection<Void> firstWriteResults = data.apply(ParDo.of(...write to first database...));
data.apply(Wait.on(firstWriteResults))
// Windows of this intermediate PCollection will be processed no earlier than when
// the respective window of firstWriteResults closes.
.apply(ParDo.of(...write to second database...));
But why would I write to a database from within a ParDo? Can we not do the same by using the I/O transforms provided in Dataflow?
Thanks.
Yes, this is possible, although there are some known limitations, and there is currently some work being done to further support this.
In order to make this work you can do something like the following:
WriteResult writeResult = data.apply(BigQueryIO.write()
    ...
    .withMethod(BigQueryIO.Write.Method.STREAMING_INSERTS)
);

data.apply(Wait.on(writeResult.getFailedInserts()))
    .apply(...some transform which writes to second database...);
It should be noted that this only works with streaming inserts and won't work with file loads. At the same time, there is some work being done currently to better support this use case, which you can follow here.
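To address the "can we not use the I/O transforms" part directly: yes, the second write can itself be a BigQueryIO transform rather than a hand-rolled ParDo. A minimal sketch, assuming data is a PCollection<TableRow> and the table specs are placeholders (schema and disposition settings omitted for brevity):

// Assumes: org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO, WriteResult
//          org.apache.beam.sdk.transforms.Wait

WriteResult firstWrite = data.apply("WriteFirstTable",
    BigQueryIO.writeTableRows()
        .to("my-project:my_dataset.first_table")
        .withMethod(BigQueryIO.Write.Method.STREAMING_INSERTS));

// The second write only starts once the corresponding windows of the
// signal collection (here, the failed-inserts output) have closed.
data.apply(Wait.on(firstWrite.getFailedInserts()))
    .apply("WriteSecondTable",
        BigQueryIO.writeTableRows()
            .to("my-project:my_dataset.second_table")
            .withMethod(BigQueryIO.Write.Method.STREAMING_INSERTS));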
Helpful references:
http://moi.vonos.net/cloud/beam-send-pubsub/
http://osdir.com/apache-beam-users/msg02120.html

How to transfer PCollection to a normal List

I have a PCollection as the result of a pipeline after doing BigQuery processing, and now I want to use some part of that data outside the pipeline. How do I transfer a PCollection to a List so that I can iterate through it and use the content?
Am I doing something wrong conceptually?
Once you are done with data processing inside your Dataflow pipeline, you'd likely want to write the data to persistent storage, such as files in Cloud Storage (GCS), a table in BigQuery, etc.
You can then consume the data outside Dataflow, for example by reading it into a List. Obviously, it would need to fit into memory for that specific action.
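For instance, a minimal sketch of reading a text output file back into a List using the Cloud Storage client library (the bucket and object names are made up, and the pipeline is assumed to have written newline-delimited records):

import com.google.cloud.storage.Blob;
import com.google.cloud.storage.BlobId;
import com.google.cloud.storage.Storage;
import com.google.cloud.storage.StorageOptions;

import java.nio.charset.StandardCharsets;
import java.util.Arrays;
import java.util.List;

public class ReadPipelineOutput {

    public static void main(String[] args) {
        // Read the file the pipeline produced and split it into records.
        Storage storage = StorageOptions.getDefaultInstance().getService();
        Blob blob = storage.get(BlobId.of("my-bucket", "output/results-00000-of-00001.txt"));
        String contents = new String(blob.getContent(), StandardCharsets.UTF_8);
        List<String> records = Arrays.asList(contents.split("\n"));
        records.forEach(System.out::println);
    }
}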
What I would do is create "side outputs" (https://cloud.google.com/dataflow/model/par-do), i.e. another PCollection that you create alongside your main output, so in the end you will have 2 PCollections as the result of your BQ process.
Just ensure that in your process function you add elements to the side output collection under the appropriate condition. Something like this:
public final void processElement(final ProcessContext context) throws Exception {
    context.output(bqProcessResult);
    if (condition) {
        context.sideOutput(myFilterTag, bqProcessResult);
    }
}
The result of that process is not a PCollection but a PCollectionTuple so you just have to do the following:
PCollectionTuple myTuples = ...; // previous process using the function above
PCollection<MyType> bqCollection = myTuples.get(bqTag);
PCollection<MyType> filteredCollection = myTuples.get(myFilterTag);
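For completeness, a sketch of how the "previous process" above would declare the tags and produce the PCollectionTuple. MyBigQueryProcessFn and bqInput are placeholder names; the tags match the snippet above, and the exact ParDo methods differ slightly between the Dataflow 1.x SDK used here and Beam 2.x:

// Tags identifying the main output and the side output used in the DoFn above.
final TupleTag<MyType> bqTag = new TupleTag<MyType>() {};
final TupleTag<MyType> myFilterTag = new TupleTag<MyType>() {};

PCollectionTuple myTuples = bqInput.apply(
    ParDo.of(new MyBigQueryProcessFn())
         .withOutputTags(bqTag, TupleTagList.of(myFilterTag)));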
