libGDX not using OpenGL ES 2.0

Preferably, I'd like to use OpenGL ES 2.0 for a new 3D game I've started. I've been developing it on an Ubuntu PC (not top-of-the-line, but decent) that I bought in 2010.
Gdx.graphics.isGL20Available() returns false, yet I'm quite sure my drivers support OpenGL 3.3.0. Here is what I get from glxinfo:
name of display: :0.0
display: :0 screen: 0
direct rendering: Yes
server glx vendor string: NVIDIA Corporation
server glx version string: 1.4
server glx extensions:
GLX_EXT_visual_info, GLX_EXT_visual_rating, GLX_SGIX_fbconfig,
GLX_SGIX_pbuffer, GLX_SGI_video_sync, GLX_SGI_swap_control,
GLX_EXT_swap_control, GLX_EXT_texture_from_pixmap, GLX_ARB_create_context,
GLX_ARB_create_context_profile, GLX_EXT_create_context_es2_profile,
GLX_ARB_create_context_robustness, GLX_ARB_multisample,
GLX_NV_float_buffer, GLX_ARB_fbconfig_float, GLX_EXT_framebuffer_sRGB
client glx vendor string: NVIDIA Corporation
client glx version string: 1.4
client glx extensions:
GLX_ARB_get_proc_address, GLX_ARB_multisample, GLX_EXT_visual_info,
GLX_EXT_visual_rating, GLX_EXT_import_context, GLX_SGI_video_sync,
GLX_NV_swap_group, GLX_NV_video_out, GLX_SGIX_fbconfig, GLX_SGIX_pbuffer,
GLX_SGI_swap_control, GLX_EXT_swap_control, GLX_ARB_create_context,
GLX_ARB_create_context_profile, GLX_NV_float_buffer,
GLX_ARB_fbconfig_float, GLX_EXT_fbconfig_packed_float,
GLX_EXT_texture_from_pixmap, GLX_EXT_framebuffer_sRGB,
GLX_NV_present_video, GLX_NV_copy_image, GLX_NV_multisample_coverage,
GLX_NV_video_capture, GLX_EXT_create_context_es2_profile,
GLX_ARB_create_context_robustness
GLX version: 1.4
GLX extensions:
GLX_EXT_visual_info, GLX_EXT_visual_rating, GLX_SGIX_fbconfig,
GLX_SGIX_pbuffer, GLX_SGI_video_sync, GLX_SGI_swap_control,
GLX_EXT_swap_control, GLX_EXT_texture_from_pixmap, GLX_ARB_create_context,
GLX_ARB_create_context_profile, GLX_EXT_create_context_es2_profile,
GLX_ARB_create_context_robustness, GLX_ARB_multisample,
GLX_NV_float_buffer, GLX_ARB_fbconfig_float, GLX_EXT_framebuffer_sRGB,
GLX_ARB_get_proc_address
OpenGL vendor string: NVIDIA Corporation
OpenGL renderer string: GeForce GT 220/PCI/SSE2
OpenGL version string: 3.3.0 NVIDIA 260.19.06
OpenGL shading language version string: 3.30 NVIDIA via Cg compiler
OpenGL extensions:
GL_ARB_blend_func_extended, GL_ARB_color_buffer_float,
GL_ARB_compatibility, GL_ARB_copy_buffer, GL_ARB_depth_buffer_float,
GL_ARB_depth_clamp, GL_ARB_depth_texture, GL_ARB_draw_buffers,
GL_ARB_draw_buffers_blend, GL_ARB_draw_elements_base_vertex,
GL_ARB_draw_instanced, GL_ARB_ES2_compatibility,
GL_ARB_explicit_attrib_location, GL_ARB_fragment_coord_conventions,
GL_ARB_fragment_program, GL_ARB_fragment_program_shadow,
GL_ARB_fragment_shader, GL_ARB_framebuffer_object,
GL_ARB_framebuffer_sRGB, GL_ARB_geometry_shader4,
GL_ARB_get_program_binary, GL_ARB_half_float_pixel,
GL_ARB_half_float_vertex, GL_ARB_imaging, GL_ARB_instanced_arrays,
GL_ARB_map_buffer_range, GL_ARB_multisample, GL_ARB_multitexture,
GL_ARB_occlusion_query, GL_ARB_occlusion_query2,
GL_ARB_pixel_buffer_object, GL_ARB_point_parameters, GL_ARB_point_sprite,
GL_ARB_provoking_vertex, GL_ARB_robustness, GL_ARB_sample_shading,
GL_ARB_sampler_objects, GL_ARB_seamless_cube_map,
GL_ARB_separate_shader_objects, GL_ARB_shader_bit_encoding,
GL_ARB_shader_objects, GL_ARB_shading_language_100, GL_ARB_shadow,
GL_ARB_sync, GL_ARB_texture_border_clamp, GL_ARB_texture_buffer_object,
GL_ARB_texture_compression, GL_ARB_texture_compression_rgtc,
GL_ARB_texture_cube_map, GL_ARB_texture_cube_map_array,
GL_ARB_texture_env_add, GL_ARB_texture_env_combine,
GL_ARB_texture_env_crossbar, GL_ARB_texture_env_dot3,
GL_ARB_texture_float, GL_ARB_texture_gather,
GL_ARB_texture_mirrored_repeat, GL_ARB_texture_multisample,
GL_ARB_texture_non_power_of_two, GL_ARB_texture_query_lod,
GL_ARB_texture_rectangle, GL_ARB_texture_rg, GL_ARB_texture_rgb10_a2ui,
GL_ARB_texture_swizzle, GL_ARB_timer_query, GL_ARB_transform_feedback2,
GL_ARB_transpose_matrix, GL_ARB_uniform_buffer_object,
GL_ARB_vertex_array_bgra, GL_ARB_vertex_array_object,
GL_ARB_vertex_buffer_object, GL_ARB_vertex_program, GL_ARB_vertex_shader,
GL_ARB_vertex_type_2_10_10_10_rev, GL_ARB_viewport_array,
GL_ARB_window_pos, GL_ATI_draw_buffers, GL_ATI_texture_float,
GL_ATI_texture_mirror_once, GL_S3_s3tc, GL_EXT_texture_env_add,
GL_EXT_abgr, GL_EXT_bgra, GL_EXT_bindable_uniform, GL_EXT_blend_color,
GL_EXT_blend_equation_separate, GL_EXT_blend_func_separate,
GL_EXT_blend_minmax, GL_EXT_blend_subtract, GL_EXT_compiled_vertex_array,
GL_EXT_Cg_shader, GL_EXT_depth_bounds_test, GL_EXT_direct_state_access,
GL_EXT_draw_buffers2, GL_EXT_draw_instanced, GL_EXT_draw_range_elements,
GL_EXT_fog_coord, GL_EXT_framebuffer_blit, GL_EXT_framebuffer_multisample,
GL_EXTX_framebuffer_mixed_formats, GL_EXT_framebuffer_object,
GL_EXT_framebuffer_sRGB, GL_EXT_geometry_shader4,
GL_EXT_gpu_program_parameters, GL_EXT_gpu_shader4,
GL_EXT_multi_draw_arrays, GL_EXT_packed_depth_stencil,
GL_EXT_packed_float, GL_EXT_packed_pixels, GL_EXT_pixel_buffer_object,
GL_EXT_point_parameters, GL_EXT_provoking_vertex, GL_EXT_rescale_normal,
GL_EXT_secondary_color, GL_EXT_separate_shader_objects,
GL_EXT_separate_specular_color, GL_EXT_shadow_funcs,
GL_EXT_stencil_two_side, GL_EXT_stencil_wrap, GL_EXT_texture3D,
GL_EXT_texture_array, GL_EXT_texture_buffer_object,
GL_EXT_texture_compression_latc, GL_EXT_texture_compression_rgtc,
GL_EXT_texture_compression_s3tc, GL_EXT_texture_cube_map,
GL_EXT_texture_edge_clamp, GL_EXT_texture_env_combine,
GL_EXT_texture_env_dot3, GL_EXT_texture_filter_anisotropic,
GL_EXT_texture_integer, GL_EXT_texture_lod, GL_EXT_texture_lod_bias,
GL_EXT_texture_mirror_clamp, GL_EXT_texture_object,
GL_EXT_texture_shared_exponent, GL_EXT_texture_sRGB,
GL_EXT_texture_swizzle, GL_EXT_timer_query, GL_EXT_transform_feedback2,
GL_EXT_vertex_array, GL_EXT_vertex_array_bgra, GL_IBM_rasterpos_clip,
GL_IBM_texture_mirrored_repeat, GL_KTX_buffer_region, GL_NV_blend_square,
GL_NV_conditional_render, GL_NV_copy_depth_to_color, GL_NV_copy_image,
GL_NV_depth_buffer_float, GL_NV_depth_clamp, GL_NV_explicit_multisample,
GL_NV_fence, GL_NV_float_buffer, GL_NV_fog_distance,
GL_NV_fragment_program, GL_NV_fragment_program_option,
GL_NV_fragment_program2, GL_NV_framebuffer_multisample_coverage,
GL_NV_geometry_shader4, GL_NV_gpu_program4, GL_NV_gpu_program4_1,
GL_NV_half_float, GL_NV_light_max_exponent, GL_NV_multisample_coverage,
GL_NV_multisample_filter_hint, GL_NV_occlusion_query,
GL_NV_packed_depth_stencil, GL_NV_parameter_buffer_object,
GL_NV_parameter_buffer_object2, GL_NV_pixel_data_range,
GL_NV_point_sprite, GL_NV_primitive_restart, GL_NV_register_combiners,
GL_NV_register_combiners2, GL_NV_shader_buffer_load,
GL_NV_texgen_reflection, GL_NV_texture_barrier,
GL_NV_texture_compression_vtc, GL_NV_texture_env_combine4,
GL_NV_texture_expand_normal, GL_NV_texture_multisample,
GL_NV_texture_rectangle, GL_NV_texture_shader, GL_NV_texture_shader2,
GL_NV_texture_shader3, GL_NV_transform_feedback,
GL_NV_transform_feedback2, GL_NV_vdpau_interop, GL_NV_vertex_array_range,
GL_NV_vertex_array_range2, GL_NV_vertex_buffer_unified_memory,
GL_NV_vertex_program, GL_NV_vertex_program1_1, GL_NV_vertex_program2,
GL_NV_vertex_program2_option, GL_NV_vertex_program3,
GL_NVX_conditional_render, GL_NVX_gpu_memory_info,
GL_SGIS_generate_mipmap, GL_SGIS_texture_lod, GL_SGIX_depth_texture,
GL_SGIX_shadow, GL_SUN_slice_accum
And a lot more output that isn't related to the version string or the extensions. I have all the extensions needed for OpenGL ES 2.0, and my driver is updated to OpenGL 3.3.0 (OpenGL 3.0 maps approximately to OpenGL ES 2.0). Is it because my software rasterizer is old? If so, there are probably others in the same situation (I doubt many Windows installations are updated past OpenGL 1.1), and I'd like to support them too. What is the best possible solution?

Gdx.graphics.isGL20Available() doesn't tell you whether your hardware supports OpenGL ES 2.0; it tells you whether a GL20 graphics context has been initialized.
You must explicitly ask libGDX to use GL20. Example for the LWJGL backend:
LwjglApplicationConfiguration config = new LwjglApplicationConfiguration();
config.title = "Game";
config.width = 800;
config.height = 480;
config.useGL20 = true; //this is important
new LwjglApplication(new YourGame(), config);
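If you are also targeting Android, the same generation of libGDX exposes a matching flag on its Android configuration class. A minimal sketch from memory of that pre-1.0 API (verify against the version you are using):
// In your Android starter activity (older, pre-1.0 libGDX API).
AndroidApplicationConfiguration config = new AndroidApplicationConfiguration();
config.useGL20 = true; // same opt-in flag as on the desktop backend
initialize(new YourGame(), config);
After initialization with this flag, Gdx.graphics.isGL20Available() should return true on devices whose driver actually provides an ES 2.0 context.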

Related

How to enable synchronization2 when using Vulkan API 1.2?

Up until now, I used the latest Vulkan API available on my development machine (1.3.201).
VK_CHECK(vkEnumerateInstanceVersion(&_apiVersion)); // returns 1.3.201
After some struggle I managed to use synchronization2 as a core feature (functions without the KHR suffix).
But now I want to target API version 1.2 to be compatible with more devices.
So when initializing VkApplicationInfo I set a lower version in the apiVersion field:
appInfo.apiVersion = VK_MAKE_API_VERSION(0, 1, 2, 0);
In order to use the synchronization2 extension, I pass the following extensions and layers to vkCreateInstance:
Extensions:
VK_KHR_surface
VK_KHR_win32_surface
VK_KHR_get_physical_device_properties2
VK_KHR_get_surface_capabilities2
VK_EXT_debug_utils
Layers:
VK_LAYER_KHRONOS_synchronization2
VK_LAYER_KHRONOS_validation
And these are the extensions and layers I pass to vkCreateDevice:
Extensions:
VK_KHR_create_renderpass2
VK_KHR_synchronization2
VK_KHR_swapchain
Layers:
VK_LAYER_KHRONOS_validation
This is my device initialization code:
VkPhysicalDeviceFeatures deviceFeatures{};
VkPhysicalDeviceVulkan12Features features12{};
features12.sType = VkStructureType::VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_VULKAN_1_2_FEATURES;
features12.timelineSemaphore = true;
VkDeviceCreateInfo deviceCreateInfo
{
    .sType = VkStructureType::VK_STRUCTURE_TYPE_DEVICE_CREATE_INFO,
    .pNext = &features12,
    .flags = VkDeviceCreateFlags{},
    .queueCreateInfoCount = static_cast<uint32_t>(createDesc.queu_create_info.size()),
    .pQueueCreateInfos = createDesc.queu_create_info.data(),
    .enabledLayerCount = static_cast<uint32_t>(config.required_layers.size()),
    .ppEnabledLayerNames = config.required_layers.data(),
    .enabledExtensionCount = static_cast<uint32_t>(config.required_extentions.size()),
    .ppEnabledExtensionNames = config.required_extentions.data(),
    .pEnabledFeatures = &deviceFeatures
};
Yet, when calling vkCmdPipelineBarrier2KHR I get a validation error:
Validation Error: [ VUID-vkCmdPipelineBarrier2-synchronization2-03848 ] Object 0: handle = 0x20b77c136f0, type = VK_OBJECT_TYPE_COMMAND_BUFFER; | MessageID = 0xa060404 | vkCmdPipelineBarrier2KHR(): Synchronization2 feature is not enabled The Vulkan spec states: The synchronization2 feature must be enabled (https://vulkan.lunarg.com/doc/view/1.3.236.0/windows/1.3-extensions/vkspec.html#VUID-vkCmdPipelineBarrier2-synchronization2-03848)
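The validation message itself points at the missing piece: on a 1.2 device, synchronization2 is a feature of the VK_KHR_synchronization2 extension and has to be enabled explicitly at device creation. A minimal sketch of what that chaining would look like (not taken from the original post; only the struct wiring is shown):
// Enable the synchronization2 feature via the KHR feature struct and
// chain it ahead of the Vulkan 1.2 feature struct in pNext.
VkPhysicalDeviceSynchronization2FeaturesKHR sync2Features{};
sync2Features.sType = VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_SYNCHRONIZATION_2_FEATURES_KHR;
sync2Features.synchronization2 = VK_TRUE;

VkPhysicalDeviceVulkan12Features features12{};
features12.sType = VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_VULKAN_1_2_FEATURES;
features12.pNext = &sync2Features; // chain: deviceCreateInfo -> features12 -> sync2Features
features12.timelineSemaphore = VK_TRUE;
// deviceCreateInfo.pNext = &features12; as in the code above.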

Vulkan validation warning catch-22 about VK_KHR_portability_subset on MoltenVK

I'm using Vulkan 1.2.170 with MoltenVK (and GLFW) on Big Sur (mid-2014 15" Retina). I created the instance with VK_LAYER_KHRONOS_validation, and when I call vkCreateDevice I get the warning
VUID-VkDeviceCreateInfo-pProperties-04451(ERROR / SPEC): msgNum: 976972960 - Validation Error:
[ VUID-VkDeviceCreateInfo-pProperties-04451 ] Object 0: handle = 0x10cfaad00,
type = VK_OBJECT_TYPE_PHYSICAL_DEVICE; | MessageID = 0x3a3b6ca0 | vkCreateDevice:
VK_KHR_portability_subset must be enabled because physical device VkPhysicalDevice 0x10cfaad00[]
supports it The Vulkan spec states: If the [VK_KHR_portability_subset] extension is included in
pProperties of vkEnumerateDeviceExtensionProperties, ppEnabledExtensions must include
"VK_KHR_portability_subset".
Okay, fine, I add it to the extensions parameter as the only extension. Then it says
Missing extension required by the device extension VK_KHR_portability_subset:
VK_KHR_get_physical_device_properties2.
and segfaults. If I add VK_KHR_get_physical_device_properties2 as well, it crashes saying that extension doesn't exist, which is true (vkEnumerateDeviceExtensionProperties doesn't return it).
Is this a bug or is there some set of extensions it will accept?
If it helps, the supported extensions are
VK_KHR_16bit_storage VK_KHR_8bit_storage VK_KHR_bind_memory2 VK_KHR_create_renderpass2 VK_KHR_dedicated_allocation VK_KHR_depth_stencil_resolve VK_KHR_descriptor_update_template VK_KHR_device_group VK_KHR_driver_properties VK_KHR_external_fence VK_KHR_external_memory VK_KHR_external_semaphore VK_KHR_get_memory_requirements2 VK_KHR_image_format_list VK_KHR_maintenance1 VK_KHR_maintenance2 VK_KHR_maintenance3 VK_KHR_multiview VK_KHR_portability_subset VK_KHR_push_descriptor VK_KHR_relaxed_block_layout VK_KHR_sampler_mirror_clamp_to_edge VK_KHR_sampler_ycbcr_conversion VK_KHR_shader_draw_parameters VK_KHR_shader_float16_int8 VK_KHR_storage_buffer_storage_class VK_KHR_swapchain VK_KHR_swapchain_mutable_format VK_KHR_timeline_semaphore VK_KHR_uniform_buffer_standard_layout VK_KHR_variable_pointers VK_EXT_debug_marker VK_EXT_descriptor_indexing VK_EXT_fragment_shader_interlock VK_EXT_hdr_metadata VK_EXT_host_query_reset VK_EXT_image_robustness VK_EXT_inline_uniform_block VK_EXT_memory_budget VK_EXT_private_data VK_EXT_robustness2 VK_EXT_scalar_block_layout VK_EXT_shader_viewport_index_layer VK_EXT_subgroup_size_control VK_EXT_texel_buffer_alignment VK_EXT_vertex_attribute_divisor VK_AMD_gpu_shader_half_float VK_AMD_negative_viewport_height VK_AMD_shader_trinary_minmax VK_INTEL_shader_integer_functions2 VK_GOOGLE_display_timing VK_NV_glsl_shader
Turns out VK_KHR_get_physical_device_properties2 is an instance extension, not a device extension, so it's passed to vkCreateInstance like so:
VkInstanceCreateInfo instCreateInfo{};
instCreateInfo.sType = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO;
const char* instExtension = "VK_KHR_get_physical_device_properties2";
instCreateInfo.enabledExtensionCount = 1;
instCreateInfo.ppEnabledExtensionNames = &instExtension;
// etcetera
VkInstance instance;
vkCreateInstance(&instCreateInfo, nullptr, &instance);
// ...
VkDeviceCreateInfo deviceCreateInfo{};
deviceCreateInfo.sType = VK_STRUCTURE_TYPE_DEVICE_CREATE_INFO;
const char* deviceExtension = "VK_KHR_portability_subset";
deviceCreateInfo.enabledExtensionCount = 1;
deviceCreateInfo.ppEnabledExtensionNames = &deviceExtension;
// etcetera
VkDevice device;
vkCreateDevice(physicalDevice, &deviceCreateInfo, nullptr, &device);
A bit late to answer this question, but I faced the same issue and used vk::enumerateInstanceExtensionProperties() and vk::PhysicalDevice::enumerateDeviceExtensionProperties() to check whether the extensions were present before enabling them.
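In the C API the same check looks roughly like this (a sketch; the physicalDevice handle and the hasDeviceExtension helper name are assumptions, not from the original post):
#include <vulkan/vulkan.h>
#include <cstring>
#include <vector>

// Returns true if the physical device advertises the named device extension.
bool hasDeviceExtension(VkPhysicalDevice physicalDevice, const char* name)
{
    uint32_t count = 0;
    vkEnumerateDeviceExtensionProperties(physicalDevice, nullptr, &count, nullptr);
    std::vector<VkExtensionProperties> props(count);
    vkEnumerateDeviceExtensionProperties(physicalDevice, nullptr, &count, props.data());
    for (const VkExtensionProperties& p : props)
        if (std::strcmp(p.extensionName, name) == 0)
            return true;
    return false;
}

// Usage: only request VK_KHR_portability_subset when it is actually offered.
// if (hasDeviceExtension(physicalDevice, "VK_KHR_portability_subset")) { ... }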

Multiple (4x) SPI devices on Raspberry Pi 2 running Windows 10 IoT

I have successfully communicated with a single SPI device (an MCP3008). Is it possible to run multiple (4x) SPI devices on a Raspberry Pi 2 with Windows 10 IoT?
I'm thinking of wiring the CS (chip select) lines manually, activating a line before calling the SPI function and deactivating it afterwards.
Will that work on Windows 10 IoT?
What about configuring the SPI chip-select pin? Is it possible to change the pin number during SPI initialization?
Is there a smarter way to use multiple (4 x MCP3008) SPI devices on Windows 10 IoT?
(I'm planning to monitor 32 analogue signals feeding into my Raspberry Pi 2 running Windows 10 IoT.)
Thanks a lot!
Of course you can use as many devices as you have spare GPIO pins.
You just have to indicate which device you are addressing.
First, set up the SPI configuration, for example using chip select line 0:
settings = new SpiConnectionSettings(0); //chip select line 0
settings.ClockFrequency = 1000000;
settings.Mode = SpiMode.Mode0;
String spiDeviceSelector = SpiDevice.GetDeviceSelector();
devices = await DeviceInformation.FindAllAsync(spiDeviceSelector);
_spi1 = await SpiDevice.FromIdAsync(devices[0].Id, settings);
You cannot use that pin for anything else afterwards! Now configure the output pins with the GpioPin class; these are the pins you will use to select a device:
GpioPin_19 = IoController.OpenPin(19);
GpioPin_19.Write(GpioPinValue.High);
GpioPin_19.SetDriveMode(GpioPinDriveMode.Output);
GpioPin_26 = IoController.OpenPin(26);
GpioPin_26.Write(GpioPinValue.High);
GpioPin_26.SetDriveMode(GpioPinDriveMode.Output);
GpioPin_13 = IoController.OpenPin(13);
GpioPin_13.Write(GpioPinValue.High);
GpioPin_13.SetDriveMode(GpioPinDriveMode.Output);
Before every transfer, select the device first (example method):
private byte[] TransferSpi(byte[] writeBuffer, byte ChipNo)
{
    var readBuffer = new byte[writeBuffer.Length];
    if (ChipNo == 1) GpioPin_19.Write(GpioPinValue.Low);
    if (ChipNo == 2) GpioPin_26.Write(GpioPinValue.Low);
    if (ChipNo == 3) GpioPin_13.Write(GpioPinValue.Low);
    _spi1.TransferFullDuplex(writeBuffer, readBuffer);
    if (ChipNo == 1) GpioPin_19.Write(GpioPinValue.High);
    if (ChipNo == 2) GpioPin_26.Write(GpioPinValue.High);
    if (ChipNo == 3) GpioPin_13.Write(GpioPinValue.High);
    return readBuffer;
}
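As a usage sketch, a single-ended read of MCP3008 channel 0 through that method would look something like this (the command bytes follow the MCP3008 datasheet; adjust the channel bits and device number for your wiring):
// MCP3008: start bit, then single-ended channel 0, then one padding byte.
byte[] writeBuffer = new byte[] { 0x01, 0x80, 0x00 };
byte[] readBuffer = TransferSpi(writeBuffer, 1); // device selected by GpioPin_19
// The 10-bit result spans the low 2 bits of byte 1 and all of byte 2.
int value = ((readBuffer[1] & 0x03) << 8) | readBuffer[2];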
From: https://projects.drogon.net/understanding-spi-on-the-raspberry-pi/
The Raspberry Pi only implements master mode at this time and has 2 chip-select pins, so can control 2 SPI devices. (Although some devices have their own sub-addressing scheme so you can put more of them on the same bus)
I've successfully used two SPI devices in the DeviceTester and Breathalyzer projects within Jared Bienz's IoT Devices GitHub repo.
Notice that in each project the SPI interface descriptor is declared explicitly in the ControllerName property, for both the ADC and the display used in these projects. Detailed information about the Breathalyzer project can be found on my blog.
// ADC
// Create the manager
adcManager = new AdcProviderManager();
adcManager.Providers.Add(
    new MCP3208()
    {
        ChipSelectLine = 0,
        ControllerName = "SPI1",
    });
// Get the well-known controller collection back
adcControllers = await adcManager.GetControllersAsync();
// Create the display
var disp = new ST7735()
{
    ChipSelectLine = 0,
    ClockFrequency = 40000000, // Attempt to run at 40 MHz
    ControllerName = "SPI0",
    DataCommandPin = gpioController.OpenPin(12),
    DisplayType = ST7735DisplayType.RRed,
    ResetPin = gpioController.OpenPin(16),
    Orientation = DisplayOrientations.Portrait,
    Width = 128,
    Height = 160,
};

How to configure OLinuXino Lime UARTs using DTB

How do I configure the UARTs on the OLinuXino Lime using a DTB file? I'm using the image from http://eewiki.net/display/linuxonarm/A10-OLinuXino-LIME.
UART 0 is already configured. This is the relevant part of the DTS file, if I understand correctly:
uart0: serial@01c28000 {
    pinctrl-names = "default";
    pinctrl-0 = <&uart0_pins_a>;
    status = "okay";
};
From http://linux-sunxi.org/Memory_map I can get the memory addresses of the other UARTs. But where do I find the syntax for the pinctrl-0 field, for instance?
Can the hardware be configured with the DTB file alone, with no need for Allwinner's FEX file?
In an "ideal" situation, should the DTB files be provided by the hardware manufacturer, or should they be written by the developer (is there a manual)?
You can use either Allwinner's FEX file or the Open Firmware Device Tree (DT).
Add these lines to the DT source file (DTS) and compile it with dtc:
uart2: serial@01c28800 {
    pinctrl-names = "default";
    pinctrl-0 = <&uart2_pins_a>;
    status = "okay";
};
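The &uart2_pins_a label refers to a pinmux node that the SoC include file (sun4i-a10.dtsi here) should already define; that is where the pinctrl-0 syntax comes from. A sunxi pinmux node of that kernel generation looks roughly like the following sketch (the pin names are illustrative, so verify them against sun4i-a10.dtsi):
uart2_pins_a: uart2@0 {
    /* illustrative pin names; check sun4i-a10.dtsi for the real ones */
    allwinner,pins = "PI18", "PI19";
    allwinner,function = "uart2";
    allwinner,drive = <0>;
    allwinner,pull = <0>;
};
To rebuild the blob, something like: dtc -I dts -O dtb -o sun4i-a10-olinuxino-lime.dtb sun4i-a10-olinuxino-lime.dts (file names depend on your tree).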

How to convert PCM audio to TrueSpeech using NAudio

I am trying to convert a PCM 8-bit 8 kHz mono file to DSP TrueSpeech 1-bit 8 kHz mono using NAudio, and I get the following error:
A first chance exception of type 'NAudio.MmException' occurred in NAudio.dll
AcmNotPossible calling acmStreamOpen
I understand that there may be an intermediate step that I am missing -- any insight would be appreciated. Here is the code I am using:
WaveFormat outWaveFormat = new TrueSpeechWaveFormat();
Debug.Print("Sample Rate: " + outWaveFormat.SampleRate); // displays "8000"
Debug.Print("Bit Rate: " + outWaveFormat.BitsPerSample); // displays "1"
FileInfo f = new FileInfo(inputFile);
String outputFileName = this.txtDest.Text + @"\" + f.Name;
using (WaveFileReader reader = new WaveFileReader(inputFile))
{
    try
    {
        using (WaveStream convertedStream = new WaveFormatConversionStream(outWaveFormat, reader))
        {
            WaveFileWriter.CreateWaveFile(outputFileName, convertedStream);
        }
    }
    catch (Exception ex)
    {
        Debug.Print(ex.Message);
    }
}
Two reasons this might be happening:
you don't have a TrueSpeech encoder. I don't think newer versions of Windows include TrueSpeech any more; it is effectively obsolete. You can run the NAudioDemo application to see what ACM codecs are on your machine.
your input format cannot be converted to the target format in one step. Are you sure your input is PCM? I would also expect the TrueSpeech codec to want 16-bit input, not 8-bit; see the sketch below.
There is a third reason this can happen, although I don't think it affects TrueSpeech: WaveFileWriter.CreateWaveFile assumes that AverageBytesPerSecond is an exact multiple of BlockAlign, which is not always true.
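If the 16-bit requirement is the culprit, a two-step conversion through an intermediate PCM format may work. A sketch under that assumption, using the same NAudio types as the question:
// Step 1: 8-bit 8 kHz mono PCM -> 16-bit 8 kHz mono PCM.
// Step 2: 16-bit PCM -> TrueSpeech.
using (var reader = new WaveFileReader(inputFile))
using (var pcm16 = new WaveFormatConversionStream(new WaveFormat(8000, 16, 1), reader))
using (var trueSpeech = new WaveFormatConversionStream(new TrueSpeechWaveFormat(), pcm16))
{
    WaveFileWriter.CreateWaveFile(outputFileName, trueSpeech);
}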