Vulkan - How to enable the Synchronization2 feature?

I am trying to use Vulkan Synchronization2 feature. However, even though I enable it (correctly as far as I can tell), the validation layer reports an error when trying to use vkCmdPipelineBarrier2:
Validation Error: [ VUID-vkCmdPipelineBarrier2-synchronization2-03848 ] Object 0: handle = 0x258b5ee3dc0, type = VK_OBJECT_TYPE_COMMAND_BUFFER; | MessageID = 0xa060404 | vkCmdPipelineBarrier2(): Synchronization2 feature is not enabled The Vulkan spec states: The synchronization2 feature must be enabled (https://vulkan.lunarg.com/doc/view/1.3.211.0/windows/1.3-extensions/vkspec.html#VUID-vkCmdPipelineBarrier2-synchronization2-03848)
Here is my instance creation code:
DEBUGGER_TRACE("Requested instance layers = {}", desc.required_layers);
DEBUGGER_TRACE("Requested instance extenstions = {}", desc.required_instance_extentions);
vk::InstanceCreateInfo createInfo{
    .flags = vk::InstanceCreateFlags(),
    .pApplicationInfo = &appInfo,
    .enabledLayerCount = static_cast<uint32_t>(desc.required_layers.size()),
    .ppEnabledLayerNames = desc.required_layers.data(), // enabled layers
    .enabledExtensionCount = static_cast<uint32_t>(desc.required_instance_extentions.size()),
    .ppEnabledExtensionNames = desc.required_instance_extentions.data()
};
return std::make_pair(vkr::Instance(context, createInfo), version);
And my device creation code:
DEBUGGER_TRACE("Requested device layers = {}", desc.required_layers);
DEBUGGER_TRACE("Requested device extentions = {}", desc.required_extentions);
vk::DeviceCreateInfo deviceInfo{
    .pNext = &features,
    .flags = vk::DeviceCreateFlags(),
    .queueCreateInfoCount = static_cast<uint32_t>(deviceCreationDesc.create_info.size()),
    .pQueueCreateInfos = deviceCreationDesc.create_info.data(),
    .enabledLayerCount = static_cast<uint32_t>(desc.required_layers.size()),
    .ppEnabledLayerNames = desc.required_layers.data(),
    .enabledExtensionCount = static_cast<uint32_t>(desc.required_extentions.size()),
    .ppEnabledExtensionNames = desc.required_extentions.data(),
    .pEnabledFeatures = &deviceFeatures
};
_device = std::make_shared<vkr::Device>(*chosenPhysicalDevice, deviceInfo);
My debug traces print the following:
Requested instance layers =
{VK_LAYER_KHRONOS_validation,VK_LAYER_KHRONOS_synchronization2}
Requested instance extensions =
{VK_KHR_surface,VK_KHR_win32_surface,VK_EXT_debug_utils}
Requested device layers =
{VK_LAYER_KHRONOS_validation,VK_LAYER_KHRONOS_synchronization2}
Requested device extentions =
{VK_KHR_swapchain,VK_KHR_synchronization2}
When using the Vulkan Configurator, I can force-enable synchronization2 and everything works just fine.
So what could be wrong? How do I enable synchronization2?

Synchronization2 is a set of functionality exposed in two ways. It was originally introduced as a KHR extension to Vulkan 1.0, and it was adopted as a core feature in Vulkan 1.3. And while these are largely the same (I have no idea if they're actually identical)... they're still exposed in two separate ways, with two different sets of function names.
You requested the extension, but you tried to use the core feature... which you did not request. You need to be consistent: if you want to code against the core feature, you must ask for it as a feature, not an extension.
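For illustration, here is a minimal sketch of both options in the same vulkan-hpp style as the question. The helper function and its parameters are hypothetical; only the structs, members and the extension-name macro come from the Vulkan headers:

#include <vulkan/vulkan.hpp>

// Hypothetical helper: creates a device with synchronization2 enabled.
// Assumes the physical device supports the feature and, for the core path,
// that VkApplicationInfo::apiVersion was set to at least VK_API_VERSION_1_3.
vk::UniqueDevice createDeviceWithSync2(vk::PhysicalDevice physicalDevice,
                                       const vk::DeviceQueueCreateInfo& queueInfo,
                                       bool useCorePath)
{
    vk::DeviceCreateInfo deviceInfo{};
    deviceInfo.queueCreateInfoCount = 1;
    deviceInfo.pQueueCreateInfos = &queueInfo;

    // Core path (Vulkan 1.3): request the feature through
    // VkPhysicalDeviceVulkan13Features and record barriers with the core
    // entry point (vk::CommandBuffer::pipelineBarrier2 / vkCmdPipelineBarrier2).
    vk::PhysicalDeviceVulkan13Features features13{};
    features13.synchronization2 = VK_TRUE;

    // Extension path (pre-1.3): enable VK_KHR_synchronization2, chain
    // VkPhysicalDeviceSynchronization2FeaturesKHR, and call the *KHR entry
    // point (vk::CommandBuffer::pipelineBarrier2KHR / vkCmdPipelineBarrier2KHR).
    vk::PhysicalDeviceSynchronization2FeaturesKHR sync2Features{};
    sync2Features.synchronization2 = VK_TRUE;
    const char* extensions[] = { VK_KHR_SYNCHRONIZATION_2_EXTENSION_NAME };

    if (useCorePath) {
        deviceInfo.pNext = &features13;
    } else {
        deviceInfo.pNext = &sync2Features;
        deviceInfo.enabledExtensionCount = 1;
        deviceInfo.ppEnabledExtensionNames = extensions;
    }

    return physicalDevice.createDeviceUnique(deviceInfo);
}

In either case, what the quoted VUID checks is the synchronization2 member of a feature struct chained into VkDeviceCreateInfo::pNext; listing VK_KHR_synchronization2 in ppEnabledExtensionNames by itself does not turn the feature on.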

Related

Receive video from a source, preprocess and stream live preview to the client | ASP.NET Core

I need to implement a server that gets video from some source, for example an IP camera, then preprocesses the images and streams them down to the client (if requested).
I have already implemented the processing part: it accepts a single frame (bitmap) and returns the processed bitmap. What I'm struggling with is receiving video from the camera and then streaming it to the client.
What would be the right way to do it?
What libraries do you recommend using?
I use ASP.NET Core for the Server part, and Angular/React for the Client.
I tried to implement gRPC, but a gRPC-Web client for TypeScript seems to be a pain in the ass.
Edit: 02.08.2022
What I achieved so far:
I figured out how to receive image output from the camera.
I found an RTSP client for C#. Source: C# RTSP Client for .NET
It works pretty well. I can receive output with little to no delay, and I use my phone to emulate the RTSP camera/server.
So the RTSP client receives raw frames (in my case H.264 I-frames/P-frames). The problem is that I need to decode those frames, preferably to Bitmap, because I use a YoloV4 ONNX model for object detection.
Here's how I set up YoloV4 with ML.NET. Source: Machine Learning with ML.NET – Object detection with YOLO
To decode the raw frames I use FFmpeg (sadly, I didn't find any FFmpeg package that works with .NET Core; I tried AForge.NET and Accord, but in both packages the FFMPEG namespace is missing after installation for some reason, so I dug through GitHub and took the FrameDecoderCore project). It's not the best solution, but it works. Now I can receive the output and decode it to Bitmap.
Now I'm facing three major issues:
How to detect objects without delaying the process of receiving camera output, and how to properly build an ONNX model pipeline just for prediction, without training.
How to convert the processed bitmaps back into a video stream. I also need to be able to save part of it as a video file on disk (the video format doesn't matter) whenever the desired object is detected.
How to stream the processed or unprocessed output to the client when the client wants to see the camera output. I'm thinking of gRPC here, sending bitmaps and then displaying them on an HTML canvas.
Here's how my service looks at the moment:
public class CCTVService : BackgroundService
{
    private readonly RtspClient _rtspClient;
    private readonly ILogger<CCTVService> _logger;

    private const int streamWidth = 480;
    private const int streamHeight = 640;

    private static readonly FrameDecoder FrameDecoder = new FrameDecoder();
    private static readonly FrameTransformer FrameTransformer = new FrameTransformer(streamWidth, streamHeight);

    public CCTVService(ILogger<CCTVService> logger)
    {
        _logger = logger;
        _rtspClient = new RtspClient(new ConnectionParameters(new Uri("rtsp://192.168.0.99:5540/ch0")));
    }

    protected override async Task ExecuteAsync(CancellationToken stoppingToken)
    {
        using (_rtspClient)
        {
            try
            {
                await _rtspClient.ConnectAsync(stoppingToken);
                _logger.LogInformation("Connecting to RTSP");
            }
            catch (RtspClientException clientException)
            {
                _logger.LogError(clientException.Message);
                //throw;
            }

            _rtspClient.FrameReceived += (obj, rawFrame) =>
            {
                if (rawFrame is not RawVideoFrame rawVideoFrame)
                    return;

                var decodedFrame = FrameDecoder.TryDecode(rawVideoFrame);
                if (decodedFrame == null)
                    return;

                using var bitmap = FrameTransformer.TransformToBitmap(decodedFrame);

                _logger.LogInformation($"Timestamp: {new DateTimeOffset(rawFrame.Timestamp).ToUnixTimeSeconds()} Timestamp-diff: {new DateTimeOffset(DateTime.Now).ToUnixTimeSeconds() - new DateTimeOffset(rawFrame.Timestamp).ToUnixTimeSeconds()}");

                // save bitmaps | Test
                //var t = new Thread(() =>
                //{
                //    using var bitmap = FrameTransformer.TransformToBitmap(decodedFrame);
                //    var name = "./test/" + new DateTimeOffset(rawFrame.Timestamp).ToUnixTimeMilliseconds().ToString() + " - " + new Random().NextInt64().ToString() + ".bmp";
                //    bitmap.Save(name);
                //});
                //t.Priority = ThreadPriority.Highest;
                //t.Start();
            };

            try
            {
                await _rtspClient.ReceiveAsync(stoppingToken);
            }
            catch
            {
                // swallow
            }
        }
    }
}
So I can't really help with parts 2 and 3 of your question, but with ML.NET one thing you might consider is batching the predictions. Instead of processing frames one at a time, you could collect 10-20 frames and then, instead of using PredictionEngine, call Transform on the model, passing it an IDataView instead of a single Bitmap.
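Not tested against your pipeline, but a rough sketch of what that could look like follows. The FrameInput/FramePrediction classes and their column names are placeholders and have to match however you configured the ONNX scorer; trainedModel is the fitted ITransformer from your pipeline:

using System.Collections.Generic;
using System.Linq;
using Microsoft.ML;
using Microsoft.ML.Data;

// Placeholder input/output schema -- adjust the column names and vector sizes
// to match your YoloV4 ONNX model.
public class FrameInput
{
    [VectorType(416, 416, 3)]
    [ColumnName("input_1:0")]
    public float[] Input { get; set; }
}

public class FramePrediction
{
    [ColumnName("Identity:0")]
    public float[] Boxes { get; set; }
}

public static class BatchScoring
{
    // Instead of calling PredictionEngine.Predict once per frame, collect a
    // small batch (say 10-20 frames), wrap it in an IDataView and run a single
    // Transform call over the whole batch.
    public static List<FramePrediction> ScoreBatch(
        MLContext mlContext, ITransformer trainedModel, IEnumerable<FrameInput> batch)
    {
        IDataView batchView = mlContext.Data.LoadFromEnumerable(batch);
        IDataView scored = trainedModel.Transform(batchView);
        return mlContext.Data
            .CreateEnumerable<FramePrediction>(scored, reuseRowObject: false)
            .ToList();
    }
}

Whether this helps depends on where the time actually goes; batching mostly amortizes per-call overhead, so it's worth measuring against the single-frame PredictionEngine path.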
Here are some samples of using ONNX models inside applications. The WPF sample might be of interest since it uses a webcam to capture inputs. I believe it uses the native Windows APIs, so it's different from how you'd do it for the web, but it might be worth looking at anyway.
https://github.com/dotnet/machinelearning-samples/tree/main/samples/csharp/end-to-end-apps/ObjectDetection-Onnx

UWP SyncFusion SfDataGrid Serialization Exception

I'm trying to make use of the SfDataGrid component in my UWP app and have everything working just fine in debug mode. When I switched over to release mode to regression test the app before publishing to the Windows Store, the app throws an exception during grid serialization.
I have an SfDataGrid defined with 4 text columns, 1 numeric column and 1 template column. The template column just includes a delete button so that the user can remove the row.
I have a method to return the serialization options as follows:
private SerializationOptions GetGridSerializationOptions()
{
    return new SerializationOptions
    {
        SerializeFiltering = false,
        SerializeColumns = true,
        SerializeGrouping = true,
        SerializeSorting = true,
        SerializeTableSummaries = true,
        SerializeCaptionSummary = true,
        SerializeGroupSummaries = true,
        SerializeStackedHeaders = true
    };
}
Then I have another method to serialize the grid settings as follows:
private void RetrieveDefaultGridSettings()
{
    using (MemoryStream ms = new MemoryStream())
    {
        gridReport.Serialize(ms, GetGridSerializationOptions());
        _defaultGridSettings = Convert.ToBase64String(ms.ToArray());
    }
}
I've followed the SyncFusion documentation (https://help.syncfusion.com/uwp/datagrid/serialization-and-deserialization) which describes how to serialize template columns. I have everything working perfectly in debug mode, but when I switch to release mode I get an exception on this line:
gridReport.Serialize(ms, GetGridSerializationOptions());
The exception is:
System.Runtime.Serialization.InvalidDataContractException: 'KnownTypeAttribute attribute on type 'Syncfusion.UI.Xaml.Grid.SerializableGridColumn' specifies a method named 'KnownTypes' to provide known types. Static method 'KnownTypes()' was not found on this type. Ensure that the method exists and is marked as static.'
I've had a look at the SerializableGridColumn class and can see a public static method called KnownTypes, so I don't really understand why this exception is happening. I'm even more confused about why it only happens in release mode.
In an attempt to fix the problem I have tried referencing the entire SDK, and also removing the SDK and referencing the specific assemblies (Syncfusion.SfGrid.UWP, Syncfusion.Data.UWP, Syncfusion.SfInput.UWP, Syncfusion.SfShared.UWP, Syncfusion.SfGridConverter.UWP, Syncfusion.XlsIO.UWP and Syncfusion.Pdf.UWP), but neither yields a different result and the exception still occurs, but only in release mode.
Switching off the setting "Compile with .NET Native tool chain" does resolve the problem, but it is not a practical solution as this blocks me from publishing the app to the Windows Store.
Thanks very much for any assistance anyone can provide.
After exhausting all possible problems with my own code, I logged an issue with Syncfusion. They're investigating and will hopefully provide a fix soon.

How can I embed extra column/external URLs in Flask-appbuilder list/detail model view?

Flask-appbuilder's ModelView can display a list and detail view for a model. Very handy, and it saves a lot of time for CRUD operations.
Sometimes the application demands more features, with extra column(s) beyond the CRUD operations. For example, in an IoT-related Device ModelView, besides CRUD, I want to link to another realtime gauge web page, or call a Web API offered by the device server to send a command to the device.
In other Python frameworks, like Tornado/Cyclone, I would manually design a template page (with extra buttons) and embed extra JavaScript code. But I am still not familiar with FAB's structure.
I can implement these extra operations as external links to other exposed methods, and add these links to the models as data fields. But I think this design is quite ugly, and the URLs are too long to display as well.
Any better ideas? Which methods should be overridden?
I found a solution on FAB's issues page on GitHub. In models.py you can define a method, then use that method in views.py. The resource list page will then treat the method as an additional column. This solution has a drawback: you have to write HTML in a model method.
Here is my code.
models.py
class Device(Model):
    id = Column(Integer, primary_key = True)
    snr = Column(String(256), unique = True)
    name = Column(String(128))
    addr = Column(String(256))
    latitude = Column(Float)
    longitude = Column(Float)
    status = Column(Enum('init','normal','transfer','suspend'), default = 'init')
    owner_id = Column(Integer, ForeignKey('account.id'))
    owner = relationship("Account")

    def __repr__(self):
        return self.name

    def get_gauge_url(self):
        # Build an HTML link to the realtime gauge page for this device.
        # The '/gauge/<id>' path is only a placeholder -- point it at your own page.
        btn = '<a href="/gauge/{}"><i class="fa fa-dashboard"></i></a>'.format(self.id)
        return btn
views.py
class DeviceView(ModelView):
    datamodel = SQLAInterface(Device)
    related_views = [PermitView, EventView]

    label_columns = {'snr':'SNR',
                     'owner_id':'Owner',
                     'get_gauge_url':'Gauge'}
    list_columns = ['name','snr','addr','owner','get_gauge_url']
    edit_columns = ['name','snr','owner','addr','latitude','longitude','status',]
    add_columns = edit_columns
    show_fieldsets = [
        ('Summary',
         {'fields':['name','snr','owner']}
        ),
        ('Device Info',
         {'fields':['addr','latitude','longitude','status'],'expanded':True}
        ),
    ]

Is it possible to programmatically add a new Mule Flow after the context has been initialized?

I would like to programmatically add new RSS Connector flows while Mule is running (after the context has been initialized). When I try to do this, I get a Lifecycle Exception saying that the context is already initialized.
Is there a way to do this without restarting the whole context?
I figured out a solution on my own. It turned out that creating a new Mule context, adding my flow, and then starting the context worked just fine. In fact, this ended up being simpler, faster, and cleaner than the other path I was going down.
Creating a default Mule context worked just fine for me. You might need to add a ConfigurationBuilder to yours if you have special needs.
MuleContext newMuleContext = new DefaultMuleContextFactory().createMuleContext();
MuleRegistry registry = newMuleContext.getRegistry();
Flow flow = createFlow();
registry.registerFlowConstruct(flow);
newMuleContext.start();
Edit. Here's the createFlow method. Your specifics will be different based on the needs of your app.
protected Flow createFlow(MuleContext context, RssBean feed) throws Exception {
    MuleRegistry registry = context.getRegistry();
    String feedName = feed.getName();

    HttpPollingConnector connector = getHttpPollingConnector(context, registry, feedName);

    EndpointURIEndpointBuilder endpointBuilder = getEndpointBuilder(context, feed, registry, feedName, connector);
    registry.registerEndpointBuilder(feedName + ".in", endpointBuilder);

    MessagePropertiesTransformer transformer = getTransformer(context, feedName);
    MessageProcessor mp = getOutboundFlowRef(context);

    Flow flow = getFlow(context, feedName, endpointBuilder, transformer, mp);
    registry.registerFlowConstruct(flow);
    return flow;
}

SharpDX: Enabling BlendState

I'm trying to get the "SharpDX.Direct3D11.DeviceContext.OutputMerger.BlendState" to work. Without it, I have a nice scene (polygons with textures on them, for a space shooter). I've done OpenGL graphics for the past three years, so I thought it might be as simple as in OpenGL - just enable blending and set the right dst/src modes. But if I set a new BlendStateDescription, all output is black, even if "RenderTarget[x].IsBlendEnabled" is set to "false".
I searched for a tutorial and found one - but it uses effects. So my question is simple: do I have to use techniques and effects in SharpDX? Is there no other way to do simple blending?
This is what I've done:
mBackBuffer = Texture2D.FromSwapChain<Texture2D>(mSwapChain, 0);
mRenderView = new RenderTargetView(mDevice, mBackBuffer);
mContext.OutputMerger.SetTargets(mDepthStencilView, mRenderView);
mContext.OutputMerger.SetBlendState(new BlendState(mDevice, new BlendStateDescription()), new SharpDX.Color4(1.0f), -1);
mContext.OutputMerger.BlendState.Description.RenderTarget[0].IsBlendEnabled = true;
mContext.OutputMerger.BlendState.Description.RenderTarget[0].SourceBlend = BlendOption.SourceAlpha;
mContext.OutputMerger.BlendState.Description.RenderTarget[0].DestinationBlend = BlendOption.InverseSourceAlpha;
mContext.OutputMerger.BlendState.Description.RenderTarget[0].BlendOperation = BlendOperation.Add;
mContext.OutputMerger.BlendState.Description.RenderTarget[0].SourceAlphaBlend = BlendOption.One;
mContext.OutputMerger.BlendState.Description.RenderTarget[0].DestinationAlphaBlend = BlendOption.Zero;
mContext.OutputMerger.BlendState.Description.RenderTarget[0].AlphaBlendOperation = BlendOperation.Add;
mContext.OutputMerger.BlendState.Description.RenderTarget[0].RenderTargetWriteMask = ColorWriteMaskFlags.All;
And even if I simply do:
mContext.OutputMerger.SetBlendState(new BlendState(mDevice, new BlendStateDescription()), new SharpDX.Color4(1.0f), -1);
mContext.OutputMerger.BlendState.Description.RenderTarget[0].IsBlendEnabled = false;
the output is all black. Maybe I just have to change something in the pixel shaders?
State objects in Direct3D11 (such as BlendState) are immutable, so once you create the new BlendState(mDevice, new BlendStateDescription()), you cannot change the description afterwards.
The normal workflow is:
var blendDescription = new BlendStateDescription();
blendDescription.RenderTarget[0].IsBlendEnabled = .. // set all values
[...]
var blendState = new BlendState(device, blendDescription);
context.OutputMerger.SetBlendState(blendState, ...);
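For example, plugging the values from the question into that workflow might look like the following sketch (untested; mDevice and mContext are the question's device and immediate context, the types are from SharpDX.Direct3D11):

// Describe standard alpha blending up front, then create the (immutable) state object once.
var desc = new BlendStateDescription();
desc.RenderTarget[0].IsBlendEnabled = true;
desc.RenderTarget[0].SourceBlend = BlendOption.SourceAlpha;
desc.RenderTarget[0].DestinationBlend = BlendOption.InverseSourceAlpha;
desc.RenderTarget[0].BlendOperation = BlendOperation.Add;
desc.RenderTarget[0].SourceAlphaBlend = BlendOption.One;
desc.RenderTarget[0].DestinationAlphaBlend = BlendOption.Zero;
desc.RenderTarget[0].AlphaBlendOperation = BlendOperation.Add;
desc.RenderTarget[0].RenderTargetWriteMask = ColorWriteMaskFlags.All;

var mBlendState = new BlendState(mDevice, desc);

// Bind it each frame before drawing the transparent geometry.
mContext.OutputMerger.SetBlendState(mBlendState, new SharpDX.Color4(1.0f), -1);

// When shutting down:
mBlendState.Dispose();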
Also, resource objects need to be stored somewhere and disposed of when you are completely done with them (for blend states, usually at the end of your application); otherwise you will get memory leaks.
I advise you to look more closely at some Direct3D11 C++ samples when you are not sure about the API usage. I also recommend reading a book like "Introduction to 3D Game Programming with DirectX 11" by Frank D. Luna, which is a great way to start learning the Direct3D11 API.