When is it useful to have broadcast data in deserialized form?

Reading the docs for Spark I see
The data broadcasted this way is cached in serialized form and deserialized before running each task. This means that explicitly creating broadcast variables is only useful when tasks across multiple stages need the same data or when caching the data in deserialized form is important.
I understand why broadcast variables are useful when re-using them across multiple tasks: you don't want to re-send them with every closure.
However, the second part says that broadcasting is useful "when caching the data in deserialized form is important". When and why would that be important? If you're only going to use the data in one task, it will still get serialized and deserialized exactly once, no?

I think you ignored the following part:
and deserialized before running each task.
A single stage typically consists of multiple tasks (it is not common to have only a single partition, is it?), and multiple tasks belonging to the same stage can be processed by the same executor. Since deserialization can be quite expensive, you may prefer to perform it only once per executor rather than once per task.


Concurrent use of VkSamplers?

So a VkSampler is created with a VkSamplerCreateInfo that just has a bunch of configuration settings, which as far as I can see would just define a pure function of some input image.
They are described as:
VkSampler objects represent the state of an image sampler which is used by the implementation to read image data and apply filtering and other transformations for the shader.
One use (possibly only use) of VkSampler is to write them to descriptors (such as VK_DESCRIPTOR_TYPE_COMBINED_IMAGE_SAMPLER) for use in descriptor sets that are bound to pipelines/shaders.
My question is: can you write the same VkSampler to multiple different descriptors? From the same or from multiple different descriptor pools? Even if one of those descriptors is in use in a currently executing render pass?
Can you use the same VkSampler concurrently from multiple different render passes / subpasses / pipelines?
Put another way, are VkSamplers stateless? Or do they represent some stateful memory on the device, meaning you shouldn't use the same one concurrently?
VkSampler objects definitely have data associated with them, so it would be wrong to call them "stateless". What they are is immutable. Like VkRenderPass, VkPipeline, and similar objects, once they are created, their contents cannot be changed.
Synchronization between accesses is (generally) needed only for cases when one of the accesses is a modification operation. Since VkSamplers are immutable, there are no modification operations. So synchronization is not needed for cases where you're accessing a VkSampler from different threads, commands, or what have you.
The only exception is the obvious one: vkDestroySampler, which requires that submitted commands that use the sampler have completed before calling the function.
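For instance, writing the same sampler into two descriptor sets (possibly allocated from different pools) is just two ordinary descriptor writes. In the sketch below the handles passed in are assumed to exist already, and binding 0 is assumed to be a sampler binding in both set layouts:

#include <vulkan/vulkan.h>

// Write one VkSampler into two descriptor sets. A single VkDescriptorImageInfo
// can back both writes, since only the (immutable) sampler handle is copied.
void bindSamplerToBothSets(VkDevice device, VkSampler sampler,
                           VkDescriptorSet setA, VkDescriptorSet setB)
{
    VkDescriptorImageInfo samplerInfo{};
    samplerInfo.sampler = sampler;            // the same VkSampler in both sets

    VkWriteDescriptorSet writes[2]{};
    for (uint32_t i = 0; i < 2; ++i) {
        writes[i].sType           = VK_STRUCTURE_TYPE_WRITE_DESCRIPTOR_SET;
        writes[i].dstSet          = (i == 0) ? setA : setB;
        writes[i].dstBinding      = 0;
        writes[i].descriptorCount = 1;
        writes[i].descriptorType  = VK_DESCRIPTOR_TYPE_SAMPLER;
        writes[i].pImageInfo      = &samplerInfo;
    }
    vkUpdateDescriptorSets(device, 2, writes, 0, nullptr);
}

Both descriptor sets can then be bound and used concurrently; the only thing you must not do is destroy the sampler while submitted work still references it.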

When to use multiple MTLRenderCommandEncoders to perform my Metal rendering?

I'm learning Metal, and there's a conceptual question that I'm trying to wrap my head around: at what level, exactly, should my code handle successive drawing operations that require different pipeline states? As I understand it (from answers like this: https://stackoverflow.com/a/43827775/2752221), I can use a single MTLRenderCommandEncoder and change its pipeline state, the vertex buffer it's using, etc., between calls to drawPrimitives:, and the encoder state that was current at the time of each call to drawPrimitives: will be preserved. So that's great. But it also seems like the design of Metal is such that one can make multiple MTLRenderCommandEncoder instances, and use them to sequentially throw batches of commands into a MTLCommandBuffer. Given that the former works – using one MTLRenderCommandEncoder and changing its state – why would one do the latter? Under what circumstances is it correct to do the former, and under what circumstances is it necessary to do the latter? What is an example of a situation where the latter would be necessary/appropriate?
If it matters, I'm working on a macOS app, using Objective-C. Thanks.
Ignoring multithreaded encoding cases, which are somewhat advanced, the main reason you'd want to create multiple render command encoders during a frame is because you need to change which textures you're rendering to.
You'll notice that you need to provide a render pass descriptor when creating a render command encoder. For this reason, we often say that the sequence of commands belonging to a particular encoder constitutes a render pass. The attachments of that descriptor refer to the textures that will be written to by the commands encoded by the encoder.
Many different techniques, including shadow mapping and postprocessing effects like bloom, require multiple passes to produce their results. Since you can't change attachments in the middle of a pass, creating a new encoder is the only way to encode multiple passes in a frame.
Relatedly, you should ordinarily use one command buffer per frame. You can, however, sometimes reduce frame time by splitting your passes across multiple command buffers, but this is highly dependent on the shape of your workload and should only be done in tandem with profiling, as it's not always an optimization.
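A minimal sketch of that two-pass structure, shown here with the metal-cpp C++ bindings (the Objective-C calls are the direct analogues). The queue, pass descriptors, and pipeline states are assumed to have been created elsewhere; the names are illustrative:

#include <Metal/Metal.hpp>

// Two passes in one frame: each pass gets its own encoder because each pass
// renders to a different set of attachments.
void encodeFrame(MTL::CommandQueue* queue,
                 MTL::RenderPassDescriptor* shadowPassDesc,   // targets the shadow-map texture
                 MTL::RenderPassDescriptor* mainPassDesc,     // targets the drawable
                 MTL::RenderPipelineState* shadowPipeline,
                 MTL::RenderPipelineState* lightingPipeline)
{
    MTL::CommandBuffer* commandBuffer = queue->commandBuffer();

    // Pass 1: render geometry into the shadow map.
    MTL::RenderCommandEncoder* shadowEncoder =
        commandBuffer->renderCommandEncoder(shadowPassDesc);
    shadowEncoder->setRenderPipelineState(shadowPipeline);
    // ... set buffers and issue draw calls for the shadow pass ...
    shadowEncoder->endEncoding();

    // Pass 2: render to the drawable, sampling the texture written by pass 1.
    MTL::RenderCommandEncoder* lightingEncoder =
        commandBuffer->renderCommandEncoder(mainPassDesc);
    lightingEncoder->setRenderPipelineState(lightingPipeline);
    // ... draw calls that read the shadow map ...
    lightingEncoder->endEncoding();

    commandBuffer->commit();
}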
In addition to Warren's answer, another way to look at the question is by examining the API. A number of Metal objects are created from descriptors. The properties of the descriptor at the time an object is created from it govern that object for its lifetime. Those are aspects of the object that can't be changed after creation.
By contrast, the object will have various setter methods to modify other properties over its lifetime.
For a render command encoder, the properties that are fixed for its lifetime are those specified by the MTLRenderPassDescriptor used to create it. If you want to render with different values for any of those properties, the only way to do so is to create a new encoder from a different descriptor. On the other hand, if you can do everything you need/want to do by using the encoder's setter methods, then you don't need a new encoder.

How should I organise a pile of singly used functions?

I am writing a C++ OpenCV-based computer vision program. The basic idea of the program could be described as follows:
Read an image from a camera.
Do some magic to the image.
Display the transformed image.
The implementation of the core logic of the program (step 2) boils down to a sequence of roughly 50 OpenCV function calls for image processing. Some temporary image objects are created to store intermediate results, but apart from that no additional entities are created. The functions from step 2 are each used only once.
I am confused about how to organise this type of code (which feels more like a script). I used to create several classes for each logical step of the image processing. Say, here I could create three classes like ImagePreprocessor, ImageProcessor, and ImagePostprocessor, and split the above-mentioned 50 OpenCV calls and temporary images between them accordingly. But it doesn't feel like a reasonable OOP design; the classes would be nothing more than a way to store the function calls.
The main() function would still just create a single object of each class and call their methods one after another:
image_preprocessor.do_magic(img);
image_processor.do_magic(img);
image_postprocessor.do_magic(img);
Which is, to my mind, essentially the same thing as calling 50 OpenCV functions one by one.
I am starting to question whether this type of code requires an OOP design at all. After all, I could simply provide a single function do_magic(), or three functions preprocess(), process(), and postprocess(). But this approach doesn't feel like good practice either: it is still just a pile of function calls, merely split across a few functions.
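For concreteness, that function-based version would look roughly like this (the specific OpenCV calls here are just placeholders for the real ~50):

#include <opencv2/opencv.hpp>

namespace magic {

// Each function wraps a handful of the ~50 sequential OpenCV calls.
cv::Mat preprocess(const cv::Mat& input) {
    cv::Mat gray, blurred;
    cv::cvtColor(input, gray, cv::COLOR_BGR2GRAY);
    cv::GaussianBlur(gray, blurred, cv::Size(5, 5), 0);
    return blurred;
}

cv::Mat process(const cv::Mat& input) {
    cv::Mat edges;
    cv::Canny(input, edges, 50.0, 150.0);
    return edges;
}

cv::Mat postprocess(const cv::Mat& input) {
    cv::Mat dilated;
    cv::dilate(input, dilated, cv::Mat());
    return dilated;
}

// do_magic() is then just the composition of the three steps.
cv::Mat do_magic(const cv::Mat& input) {
    return postprocess(process(preprocess(input)));
}

}  // namespace magic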
I wonder: are there common practices for organising this script-like kind of code? And what would the approach be if this code were part of a larger OOP system?
Usually, in image processing, you have a pipeline of various image-processing modules. The same applies to video processing, where each image is processed according to its timestamp order in the video.
Constraints to consider before designing such a pipeline:
The order of execution of these modules is not always the same; thus, the pipeline should be easily configurable.
All modules of the pipeline should be executable in parallel with each other.
Each module of the pipeline may itself use multithreading internally. (Out of scope for this answer, but a good idea when a single module becomes the bottleneck of the pipeline.)
Each module should easily adhere to the design and have the flexibility to change its internal implementation without affecting other modules.
The result of preprocessing a frame in one module should be available to later modules.
Proposed Design.
Video Pipeline
A video pipeline is a collection of modules. For now, assume a module is a class whose process method is called with some data. How each module is executed depends on how the modules are stored in the VideoPipeline. To explain further, consider the two categories below.
Here, let's say we have modules A, B, and C which always execute in the same order. We will discuss the solution with a video of frames 1, 2, and 3.
a. Linked list: In a single-threaded application, frame 1 is first processed by A, then B, and then C. The process is repeated for the next frame, and so on. So a linked list seems like an excellent choice for a single-threaded application.
For a multithreaded application, speed is what matters, so of course you would want all your modules busy on that 128-core machine. This is where the Pipeline class comes into play. If each Pipeline object runs in a separate thread, the whole application, which may have 10 or 20 modules, starts running multithreaded. Note that the single-threaded/multithreaded approach can be made configurable.
b. Directed acyclic graph: The above linked-list implementation can be improved further when you have plenty of processing power and want to reduce the lag between the pipeline's input and its response. Such a case is when module C does not depend on B, but only on A. In that case, a frame can be processed in parallel by modules B and C using a DAG-based implementation. However, I wouldn't recommend this, as the benefits are not great compared to the increased complexity: the outputs of modules B and C then need to be managed by, say, a module D that depends on B, or C, or both, and the number of scenarios grows.
Thus, for simplicity's sake, let's use the linked-list-based design.
Pipeline
Create a linked list of PipelineElements.
Make the pipeline's process method call the process method of the first element.
PipelineElement
First, the PipelineElement processes the information by calling its ImageProcessor (see below). The PipelineElement passes a Packet (holding all the data; see below) to the ImageProcessor and receives the updated packet.
If the next element is not null, it calls that PipelineElement's process method and passes the updated packet along.
If the next element of a PipelineElement is null, stop. This element is special, as it has an Observer object; in all other PipelineElements the Observer field is set to null.
FrameReader (VideoReader/ImageReader)
For the video/image reader, create an abstract class. Whether you process video, still images, or several sources, processing is done one frame at a time, so also create an abstract class (interface) ImageProcessor.
A FrameReader object stores a reference to the pipeline.
For each frame, it pushes the information in by calling the Pipeline's process method.
ImageProcessor
There are no separate Pre- and Post-ImageProcessors. For example, Retinex processing is typically used as post-processing, but some applications can use it as pre-processing; a Retinex processing class would simply implement ImageProcessor. Each element holds its ImageProcessor and the next PipelineElement object.
Observer
A special class which extends PipelineElement and produces meaningful output, e.g. via a GUI or by writing to disk.
Multithreading
1. Make each element's process method run in its own thread.
2. Each thread polls messages from a BlockingQueue (of small size, say 2-3 frames) that acts as a buffer between two PipelineElements. Note: the queue helps average out the speed of each module, so small jitters (a module taking too long on one frame) do not affect the video output rate and playback stays smooth.
Packet
A Packet stores all the information, such as the input frame or a Configuration object. This way you can store intermediate calculations as well as observe in real time the effect of changing an algorithm's configuration through a configuration manager.
To conclude, each element can now process in parallel. The first element will process the nth frame, the second element the (n-1)th frame, and so on; but with this, more issues will pop up, such as pipeline bottlenecks and additional delays because less CPU time is available to each element.
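A minimal single-threaded C++ sketch of the structure described above (class and method names follow the answer; the BlockingQueue/threading part and the FrameReader/Observer classes are left out for brevity):

#include <memory>
#include <vector>
#include <opencv2/opencv.hpp>

// Packet: carries the frame plus any intermediate results or configuration.
struct Packet {
    cv::Mat frame;
    // ... intermediate buffers, configuration handles, timestamps ...
};

// ImageProcessor: one interface for every module (no Pre/Post distinction).
class ImageProcessor {
public:
    virtual ~ImageProcessor() = default;
    virtual void process(Packet& packet) = 0;
};

class RetinexProcessor : public ImageProcessor {
public:
    void process(Packet& packet) override {
        // Retinex (or any other OpenCV-based transformation) goes here.
    }
};

// PipelineElement: runs its processor, then hands the packet to the next element.
class PipelineElement {
public:
    explicit PipelineElement(std::unique_ptr<ImageProcessor> processor)
        : processor_(std::move(processor)) {}
    void setNext(PipelineElement* next) { next_ = next; }
    void process(Packet& packet) {
        processor_->process(packet);
        if (next_) next_->process(packet);
        // A null next_ marks the tail; that is where an Observer (GUI/disk) would live.
    }
private:
    std::unique_ptr<ImageProcessor> processor_;
    PipelineElement* next_ = nullptr;
};

// Pipeline: owns the linked list and forwards each frame to the first element.
class Pipeline {
public:
    void add(std::unique_ptr<ImageProcessor> processor) {
        elements_.push_back(std::make_unique<PipelineElement>(std::move(processor)));
        if (elements_.size() > 1)
            elements_[elements_.size() - 2]->setNext(elements_.back().get());
    }
    void process(Packet& packet) {
        if (!elements_.empty()) elements_.front()->process(packet);
    }
private:
    std::vector<std::unique_ptr<PipelineElement>> elements_;
};

Reordering the pipeline is then just a matter of calling add() in a different order, which is what makes the design easily configurable.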
This structure lends itself to the pipes and filters architecture (see Pattern-Oriented Software Architecture Volume 1: A System of Patterns by Frank Buschmann):
The Pipes and Filters architectural pattern provides a structure for systems that process a stream of data. Each processing step is encapsulated in a filter component. Data is passed through pipes between adjacent filters. Recombining filters allows you to build families of related systems.
See also this short description (with images) from the Enterprise Integration Patterns book.

Difference in principles behind CopyOnWriteArrayList and ConcurrentHashMap

In the advanced Java collections API we have CopyOnWriteArrayList and ConcurrentHashMap, yet the underlying principles of these data structures are different. For example, ConcurrentHashMap only locks the segment of the map on which the write operation is happening; this is how it prevents synchronization problems without hurting performance.
CopyOnWriteArrayList, on the other hand, prevents concurrency problems by making a duplicate of the original list. Why are these implementations so different? Is Java just testing to see which one works better?
A concurrent data structure is designed to ensure that any sequence of individual operations on it will always leave it in a state which is consistent from its own point of view, but a "snapshot" formed by reading the data structure piece by piece may not necessarily represent any state which the data structure ever held. For example, if "Zachary" is renamed to "Adam" while one is reading out a collection of users, the renamed user might get read out as "Adam", as "Zachary", as both, or as neither. Even if the collection was never in a state where the user existed under both names, or under neither, the enumeration might make it look as though it did.
A copy-on-write collection is designed to let one take a snapshot of the collection's entire state and guarantee that there was some moment in time when the collection actually had that state. The result of every action, including snapshot requests, should be consistent with each action having been performed at some discrete moment in time between when the request was issued and when it was reported to have completed. If two requests are issued before either completes, the choice of which action precedes the other is arbitrary, but there must be a globally consistent ordering.
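To make that snapshot guarantee concrete, here is a rough sketch of the copy-on-write idea in C++ (not Java's actual implementation): readers grab an immutable snapshot pointer, writers copy, modify, and republish the whole list.

#include <memory>
#include <mutex>
#include <vector>

// Rough analogue of a copy-on-write list: readers see a consistent snapshot
// that really existed at some point; writers copy, modify, and swap.
template <typename T>
class CopyOnWriteList {
public:
    std::shared_ptr<const std::vector<T>> snapshot() const {
        std::lock_guard<std::mutex> lock(mutex_);
        return data_;                              // cheap: just a pointer copy
    }
    void add(const T& value) {
        std::lock_guard<std::mutex> lock(mutex_);
        auto copy = std::make_shared<std::vector<T>>(*data_);  // duplicate the list
        copy->push_back(value);
        data_ = std::move(copy);                   // publish the new version
    }
private:
    mutable std::mutex mutex_;
    std::shared_ptr<const std::vector<T>> data_ = std::make_shared<std::vector<T>>();
};

Iteration runs over whatever snapshot() returned, so a concurrent add() never disturbs an in-progress enumeration; it simply isn't visible to it.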

Parallelizing L2S Entity Retrieval

Assuming a typical domain entity approach with SQL Server and a dbml/L2S DAL with a logic layer on top of that:
In situations where lazy loading is not an option, I have settled on a convention where getting a list of entities does not also get each item's child entities (no loading), but getting a single entity does (eager loading).
Since getting a single entity also gets children, it causes a cascading effect in which each child then gets its children too. This sounds bad, but as long as the model is not too deep, I usually don't see performance problems that outweigh the benefits of the ease of use.
So if I want to get a list in which each of the items is fully hydrated with children, I combine the GetList and GetItem methods. So I'll get a list and then loop through it getting each item with the full cascade. Even this is generally acceptable in many of the projects I've worked on - but I have recently encountered situations with larger models and/or more data in which it needs to be more efficient.
I've found that partitioning the loop and executing it on multiple threads yields excellent results. In my first experiment with a list of 50 items from one particular project, I did 5 threads of 10 items each and got a 3X improvement in time.
Of course, the mileage will vary depending on the project, but all else being equal this is clearly a big opportunity. However, before I go further, I was wondering what others who have already been through this have done. What are some good approaches to parallelizing this type of thing?
Usually it is faster to make a single database call that returns a set of records.
This recordset can "hydrate" the top-level objects, then another recordset can load child objects. I'm not sure how your situation does not allow lazy-loading, but this method is essentially lazy-loading, and will surely be faster than making multiple calls to the database that returns a single record each time.
You could make asynchronous calls to the database so that multiple queries are running in parallel. If you combine this with the first strategy for each "layer" of the model, and write a somewhat more complex hydration function based on multiple-record return sets, you should see that the database handles concurrent connections very well (which is why you see a performance gain from using multiple threads).
But you don't need to explicitly create threads - check out the asynchronous methods of a SqlCommand.