'Double' missing from VertexAttribPointerType enum in OpenTK 1.0?

I'm trying to specify the type of my GL.VertexAttribPointer(...) argument as GL_DOUBLE. This should be valid according to the documentation for this OpenTK function for ES20 (link).
However, the VertexAttribPointerType enum seems to be missing the Double type for OpenTK-1.0. In other words, the following line:
GL.VertexAttribPointer(ATTRIBUTE_COORD2D, 3, VertexAttribPointerType.Double, false, 0, quadVertices);
...fails to compile, since VertexAttribPointerType only provides the following definitions:
using System;

namespace OpenTK.Graphics.ES20
{
    public enum VertexAttribPointerType
    {
        Byte = 5120,
        UnsignedByte,
        Short,
        UnsignedShort,
        Float = 5126,
        Fixed = 5132
    }
}
Is there a workaround for this issue? How else are you supposed to specify a double[] of vertices for the vertex shader?

The OpenGL ES 2.0 manual page for glVertexAttribPointer says:
GL_BYTE, GL_UNSIGNED_BYTE, GL_SHORT, GL_UNSIGNED_SHORT, GL_FIXED, or
GL_FLOAT are accepted
So the reason OpenTK doesn't expose Double is that the underlying API doesn't support it either; the OpenTK documentation is probably suffering from a copy-paste error. The practical workaround is to convert your vertex data to float (or fixed) before uploading it.
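For example, a minimal sketch using the names from the question (quadVertices is assumed to be the double[] from the failing line, and the array-taking VertexAttribPointer overload is assumed):

// GL ES 2.0 has no double attribute type, so convert the vertex data
// to float once, up front, and pass Float as the pointer type.
float[] quadVerticesF = Array.ConvertAll(quadVertices, d => (float)d);
GL.VertexAttribPointer(ATTRIBUTE_COORD2D, 3, VertexAttribPointerType.Float,
    false, 0, quadVerticesF);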

vkCreateShaderModule doesn't fail, even when pCode doesn't point to valid SPIR-V code

According to the documentation, the pCode field of the VkShaderModuleCreateInfo struct
must point to valid SPIR-V code, formatted and packed as described by the Khronos SPIR-V Specification.
Now, I made a typo in a call to the following utility function and unintentionally passed the file name of the GLSL source as shader_file_name.
#include <cassert>
#include <cstdint>
#include <fstream>
#include <stdexcept>
#include <string>
#include <vector>
#include <vulkan/vulkan.h>

void create_shader_module(VkDevice device, std::string const& shader_file_name)
{
    std::ifstream shader_file(shader_file_name, std::ios::binary);
    shader_file.seekg(0, std::ios_base::end);
    std::size_t const shader_file_size = shader_file.tellg();
    if (shader_file_size > 0)
    {
        assert(shader_file_size % sizeof(std::uint32_t) == 0);
        std::vector<char> binary(shader_file_size);
        shader_file.seekg(0, std::ios_base::beg);
        shader_file.read(binary.data(), shader_file_size);

        VkShaderModuleCreateInfo shader_module_create_info{};
        shader_module_create_info.sType = VK_STRUCTURE_TYPE_SHADER_MODULE_CREATE_INFO;
        shader_module_create_info.codeSize = shader_file_size;
        shader_module_create_info.pCode = reinterpret_cast<std::uint32_t const*>(binary.data());

        VkShaderModule shader_module;
        if (vkCreateShaderModule(device, &shader_module_create_info, nullptr, &shader_module) != VK_SUCCESS)
            throw std::runtime_error("Could not create shader module");
    }
}
Despite the typo, the code didn't throw, i.e. vkCreateShaderModule returned VK_SUCCESS. Why?
(Note that a subsequent call to vkCreateGraphicsPipelines with a VkPipelineShaderStageCreateInfo which uses the generated shader module fails.)
The validation layers would have found this problem, emitting a message:
SPIR-V module not valid: Invalid SPIR-V magic number.
The validation layers run the SPIR-V validator at vkCreateShaderModule time.
You are using Vulkan, not OpenGL. In Vulkan, it is not up to the implementation to validate your SPIR-V code. The Valid Usage for vkCreateShaderModule says that "pCode must point to valid SPIR-V code, formatted and packed as described by the
Khronos SPIR-V Specification." As with any other Valid Usage statement, if you violate it, the implementation will not tell you that you have done so.
You simply get undefined behavior.

Dataflow returns correct type locally, but not when executed in the cloud

Given a BigQuery table FLOAT_BUG.float_bug with a FLOAT column named VHH holding the five values 2.0, 2.245, 1.773, 4.567, and 1.342, and a simple ParDo which reads each value and prints its type:
import com.google.api.services.bigquery.model.TableRow;
import com.google.cloud.dataflow.sdk.Pipeline;
import com.google.cloud.dataflow.sdk.io.BigQueryIO;
import com.google.cloud.dataflow.sdk.options.DataflowPipelineOptions;
import com.google.cloud.dataflow.sdk.options.DataflowPipelineWorkerPoolOptions;
import com.google.cloud.dataflow.sdk.options.PipelineOptionsFactory;
import com.google.cloud.dataflow.sdk.runners.BlockingDataflowPipelineRunner;
import com.google.cloud.dataflow.sdk.transforms.DoFn;
import com.google.cloud.dataflow.sdk.transforms.ParDo;

public class FloatBug {
    public static void main(String[] args) {
        DataflowPipelineOptions options = PipelineOptionsFactory.create().as(DataflowPipelineOptions.class);
        options.setRunner(BlockingDataflowPipelineRunner.class);
        options.setProject("<project_id>");
        options.setWorkerMachineType("n1-standard-1");
        options.setZone("us-central1-a");
        options.setStagingLocation("<gcs_bucket>");
        options.setNumWorkers(1);
        options.setMaxNumWorkers(1);
        options.setAutoscalingAlgorithm(DataflowPipelineWorkerPoolOptions.AutoscalingAlgorithmType.NONE);

        Pipeline pipeline = Pipeline.create(options);
        pipeline.apply(BigQueryIO.Read.from("FLOAT_BUG.float_bug")).apply(ParDo.of(new DoFn<TableRow, TableRow>() {
            @Override
            public void processElement(ProcessContext c) throws Exception {
                Object o = c.element().get("VHH");
                if (o instanceof Double) {
                    System.out.println("Awesome. Got expected Double: " + o);
                } else if (o instanceof Integer) {
                    System.out.println("Bummer. Got an Integer: " + o);
                } else {
                    assert false;
                }
            }
        }));
        pipeline.run();
    }
}
Running locally gives back a Double for every value, which is what I would expect:
Awesome. Got expected Double: 2.0
Awesome. Got expected Double: 2.245
Awesome. Got expected Double: 1.773
Awesome. Got expected Double: 4.567
Awesome. Got expected Double: 1.342
However, running in the cloud using the Dataflow service gives back an Integer for the value 2.0:
Awesome. Got expected Double: 2.245
Awesome. Got expected Double: 1.342
Awesome. Got expected Double: 1.773
Awesome. Got expected Double: 4.567
Bummer. Got an Integer: 2
It should return a Double, not an Integer, for 2.0.
The observation is true. A pipeline, which reads input from BigQuery, may output data with a different type than the underlying data type in the BigQuery schema. As observed, the type may also vary from element to element.
This is an unfortunate consequence of the fact that the Dataflow service first exports the data from BigQuery to JSON-encoded files in Google Cloud Storage, and then reads the data back from those files. JSON, of course, doesn't preserve types: a floating-point value like 2.0 may be encoded as the literal "2", which is then read back as an Integer in Java. This doesn't occur when executing pipelines with DirectPipelineRunner, because that runner reads from BigQuery directly.
Now, the easiest way to avoid these kinds of problems is via the Number abstract class in Java. It is the superclass of both Double and Integer, so it should be safe to cast the result to Number and call its doubleValue() method.
That said, going forward, I expect this behavior to change. The exact timeline is not known yet, but the behavior of the Dataflow service should shortly match local execution. The workaround via the Number class will remain correct either way.

ImageResizer: how do you use Instructions?

The documentation for ResizeSettings says:
"Replaced by the Instructions class"
http://documentation.imageresizing.net/docu/ImageResizer/ResizeSettings.htm
The documentation for Instructions says:
"The successor to ResizeSettings."
http://documentation.imageresizing.net/docu/ImageResizer/Instructions.htm
However, I cannot figure out how to use Instructions instead of ResizeSettings. I've tried:
Google
Documentation (documentation.imageresizing.net)
Looking through the Object Browser for uses of Instructions
Searching ImageResizer.dll in .NET Reflector for uses of Instructions
Decompiling all of ImageResizer.dll and searching through the resulting code.
If Instructions replaces ResizeSettings, then how do I use it instead of ResizeSettings?
Edit - more detail:
This is one way to use ResizeSettings:
public static Bitmap Resize(Bitmap bitmap, int maxHeight, int maxWidth)
{
    var setting = new ResizeSettings
    {
        MaxHeight = maxHeight,
        MaxWidth = maxWidth,
    };
    return ImageBuilder.Current.Build(bitmap, setting);
}
Having read that Instructions is a replacement for ResizeSettings, one of the first things I tried was this (I was hoping ImageBuilder might have an overloaded Build method):
public static Bitmap Resize(Bitmap bitmap, int maxHeight, int maxWidth)
{
    var instructions = new Instructions
    {
        Width = maxWidth,
        Height = maxHeight,
        Mode = FitMode.Max
    };
    return ImageBuilder.Current.Build(bitmap, instructions);
}
In an unexpected turn of events, the documentation is ahead of reality.
You can use the Instructions class, but for now you must convert it to a ResizeSettings instance first, like so:
.Build(source, dest, new ResizeSettings(new Instructions("width=20")));
In the next major release, this will accept an Instructions class directly.
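Applied to the Resize helper from the question, that works out to something like this sketch (it assumes the FitMode.Max intent from the second snippet above, and relies on the ResizeSettings(Instructions) constructor shown in the one-liner):

public static Bitmap Resize(Bitmap bitmap, int maxHeight, int maxWidth)
{
    // Express the resize as Instructions, then wrap them in ResizeSettings
    // until Build() accepts an Instructions instance directly.
    var instructions = new Instructions
    {
        Width = maxWidth,
        Height = maxHeight,
        Mode = FitMode.Max
    };
    return ImageBuilder.Current.Build(bitmap, new ResizeSettings(instructions));
}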

Using XNA Math in a DLL Class

I'm having a problem using XNA Math in a DLL I'm creating. I have a class that lives in the DLL and is going to be exported. It has a member variable of type XMVECTOR, and in the class constructor I try to initialize that XMVECTOR. I get an access violation reading location 0x00000000.
The code runs something like this:
class DLLClass
{
public:
    DLLClass(void);
    ~DLLClass(void);
protected:
    XMVECTOR vect;
    XMMATRIX matr;
};

DLLClass::DLLClass(void)
{
    vect = XMLoadFloat3(&XMFLOAT3(0.0f, 0.0f, 0.0f)); // this is the line causing the access violation
}
Note that this class is in a DLL that is going to be exported. I do not know if that makes a difference; it's just some further info.
Also while I'm at it, I have another question:
I also get the warning: struct '_XMMATRIX' needs to have dll-interface to be used by clients of class 'DLLClass'
Is this fatal? If not, what does it mean and how can I get rid of it? Note that DLLClass is going to be exported, and its "clients" will probably use the variable 'matr'.
Any help would be appreciated.
EDIT: just some further info: I've debugged the code line by line, and the error occurs when the return value of XMLoadFloat3 is assigned to vect.
This code is only legal if you are building for x64 native, or if you use _aligned_malloc to ensure the memory for all instances of DLLClass is 16-byte aligned. x86 (32-bit) malloc and new only provide 8-byte alignment by default. You can 'get lucky', but it's not stable.
See DirectXMath Programming Guide, Getting Started
You have three choices:
Ensure DLLClass is always 16-byte aligned
Use XMFLOAT4 and XMFLOAT4X4 instead and do explicit load/stores
Use the SimpleMath wrapper types in the DirectX Tool Kit instead, which handle the loads/stores for you.
You also shouldn't take the address of a temporary:
vect = XMLoadFloat3(&XMFLOAT3(0.0f, 0.0f, 0.0f));
You need
XMFLOAT3 foo(0.0f, 0.0f, 0.0f);
vect = XMLoadFloat3(&foo);

How to use ClearType with double buffering on Compact Framework?

When I draw a string into a buffer, the resulting output is not anti-aliased the way I'd expect. This code illustrates the problem... just put this in a standard smart device project's Form1.cs:
protected override void OnPaint(PaintEventArgs e)
{
    Bitmap buffer = new Bitmap(Width, Height, PixelFormat.Format32bppRgb);
    using (Graphics g = Graphics.FromImage(buffer))
    {
        g.Clear(Color.White);
        g.DrawString("Hello, World", Font, new SolidBrush(Color.Black), 5, 5);
    }
    e.Graphics.DrawImage(buffer, 0, 0);
}
On the other hand, if I just draw the string into the Graphics object that was passed in with the PaintEventArgs, it renders in ClearType just as I'd expect.
I figure I've got to create my Graphics buffer in a way that makes it use font smoothing, but I don't see a way to do that.
Turns out it was a simple problem: removing the PixelFormat.Format32bppRgb argument made it work fine. It looks like you need to make sure your buffer has the same pixel format as the surface it's drawn onto.
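In other words, a minimal sketch of the corrected OnPaint (using the two-argument Bitmap constructor, which leaves the buffer in the default pixel format):

protected override void OnPaint(PaintEventArgs e)
{
    // No explicit PixelFormat: the buffer now matches the target surface,
    // so the ClearType-rendered text survives the DrawImage copy intact.
    Bitmap buffer = new Bitmap(Width, Height);
    using (Graphics g = Graphics.FromImage(buffer))
    {
        g.Clear(Color.White);
        g.DrawString("Hello, World", Font, new SolidBrush(Color.Black), 5, 5);
    }
    e.Graphics.DrawImage(buffer, 0, 0);
}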
Set the SmoothingMode property of your Graphics object:
g.SmoothingMode = SmoothingMode.AntiAlias;
You will have to use gdiplus.dll (a few wrappers for it exist), but it is only available on Windows Mobile 6 Professional (not Standard).