MTLTextureDescriptor conundrum - objective-c

I am trying to convert the "Metal By Example" TextRendering app from iOS to macOS. Unfortunately, Xcode tells me that the MTLTextureDescriptor's storageMode needs to be set to MTLStorageModePrivate, and when I do, replaceRegion:mipmapLevel:withBytes:bytesPerRow: throws "failed assertion `CPU access for textures with MTLResourceStorageModePrivate storage mode is disallowed'."
MTLTextureDescriptor *textureDesc = [MTLTextureDescriptor texture2DDescriptorWithPixelFormat:AAPLDepthPixelFormat
                                                                                        width:MBEFontAtlasSize
                                                                                       height:MBEFontAtlasSize
                                                                                    mipmapped:NO];
textureDesc.storageMode = MTLStorageModePrivate;
textureDesc.usage = MTLTextureUsageRenderTarget | MTLTextureUsageShaderRead | MTLTextureUsageShaderWrite;
textureDesc.usage = MTLTextureUsageShaderRead; //MTLTextureUsageRenderTarget;
MTLRegion region = MTLRegionMake2D(0, 0, MBEFontAtlasSize, MBEFontAtlasSize);
_fontTexture = [_device newTextureWithDescriptor:textureDesc];
[_fontTexture setLabel:@"Font Atlas"];
[_fontTexture replaceRegion:region mipmapLevel:0 withBytes:_fontAtlas.textureData.bytes bytesPerRow:MBEFontAtlasSize];
Any help would be tremendously appreciated!

On macOS, you actually have to synchronize resources between CPU/RAM and GPU memory explicitly (because the Mac might have a dedicated GPU with its own memory, in contrast to the shared-memory model on iOS).
For that, you need to set the storage mode to managed and use an MTLBlitCommandEncoder to copy the data between the CPU and GPU side (see the documentation of MTLStorageModeManaged).
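As a rough sketch, assuming the atlas uses an ordinary color format such as MTLPixelFormatR8Unorm for both textures and that a _commandQueue exists as in the sample (this is illustrative, not the exact MBE code): fill a CPU-writable managed staging texture with replaceRegion:, then blit it into the private texture.

// region, _device and _fontTexture as in the question above
MTLTextureDescriptor *stagingDesc =
    [MTLTextureDescriptor texture2DDescriptorWithPixelFormat:MTLPixelFormatR8Unorm
                                                        width:MBEFontAtlasSize
                                                       height:MBEFontAtlasSize
                                                    mipmapped:NO];
stagingDesc.storageMode = MTLStorageModeManaged;   // CPU-writable on macOS
id<MTLTexture> stagingTexture = [_device newTextureWithDescriptor:stagingDesc];
[stagingTexture replaceRegion:region
                  mipmapLevel:0
                    withBytes:_fontAtlas.textureData.bytes
                  bytesPerRow:MBEFontAtlasSize];   // 1 byte per texel for R8Unorm

// Copy into the private texture on the GPU timeline.
id<MTLCommandBuffer> commandBuffer = [_commandQueue commandBuffer];
id<MTLBlitCommandEncoder> blit = [commandBuffer blitCommandEncoder];
[blit copyFromTexture:stagingTexture
          sourceSlice:0
          sourceLevel:0
         sourceOrigin:MTLOriginMake(0, 0, 0)
           sourceSize:MTLSizeMake(MBEFontAtlasSize, MBEFontAtlasSize, 1)
            toTexture:_fontTexture
     destinationSlice:0
     destinationLevel:0
    destinationOrigin:MTLOriginMake(0, 0, 0)];
[blit endEncoding];
[commandBuffer commit];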

Related

Out of memory error when creating large lists in memory vb.net

I need to work with a large amount of data in memory. I am loading it from an SQLite database on an SSD and using EF6 to construct business objects from it. As soon as the Process Memory window shows usage hitting 3.2 GB, I get an Out Of Memory exception from Entity Framework.
For now I am just loading into lists. I had read somewhere that there were limits on the sizes of list structures, so instead of using one big list I have created multiple simple DataBlock container objects, each holding a chunk of the required data. It doesn't seem to make any difference. The PC has plenty of RAM (16 GB). I am using a new context to populate each DataBlock and then destroying it.
For Each DataBlock In DataBlocks
    Using Context As New mainEntities
        Dim FirstRecordTimeUTC As Long = TimeFunctions.ConvertDateToUTC(DataBlock.StartDate)
        Dim LastRecordTimeUTC As Long = TimeFunctions.ConvertDateToUTC(DataBlock.EndDate)
        Dim CandlesInRange = (From Cand In Context.HistoricalRecords
                              Where Cand.time_close_utc >= FirstRecordTimeUTC
                              Where Cand.time_close_utc <= LastRecordTimeUTC
                              Order By Cand.id
                              Select Cand).ToList
        DataBlock.AllCandles = CandlesInRange
        Dim RTsInRange = (From Cand In Context.HistoricalRecordRealTimes
                          Where Cand.time_close_utc >= FirstRecordTimeUTC
                          Where Cand.time_close_utc <= LastRecordTimeUTC
                          Order By Cand.id
                          Select Cand).ToList
        DataBlock.AllRTs = RTsInRange
        Dim StatsInRange = (From Cand In Context.InstrumentStats
                            Where Cand.time_close_utc >= FirstRecordTimeUTC
                            Where Cand.time_close_utc <= LastRecordTimeUTC
                            Order By Cand.id
                            Select Cand).ToList
        DataBlock.AllStats = StatsInRange
    End Using
Next
The compiler platform is set to 'Any CPU'. System is as follows:
Win 10 64, VS 2017, 16GB RAM, Ryzen 5 3600
Any thoughts on what I am doing wrong would be much appreciated.
Visual Vincent has answered this question best in the comments:
Projects targeting AnyCPU and .NET 4.5 and higher by default use the configuration "AnyCPU 32-bit preferred". This means that your application will always be compiled as a 32-bit app (which are by default limited to 4 GB of RAM) even on a 64-bit system, the exception being when you're running on an ARM processor (in which case it will be compiled to ARM).
You need to target the classic AnyCPU configuration in order for your app to compile as a 64-bit app on your 64-bit system. Right-click your project in the Solution Explorer and press Properties, go to the Compile tab and uncheck the box labeled Prefer 32-bit (it might be under Advanced Compile Options as well... I don't have VS in front of me at the moment).
He also referenced another question that explains this setting:
What is the purpose of the “Prefer 32-bit” setting in Visual Studio and how does it actually work?
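If you prefer to edit the project file rather than the UI, the corresponding MSBuild property is Prefer32Bit. A hedged sketch of the relevant .vbproj fragment (your property groups will contain more entries than this):

<PropertyGroup>
  <PlatformTarget>AnyCPU</PlatformTarget>
  <Prefer32Bit>false</Prefer32Bit>
</PropertyGroup>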

STM32F103 SPI different pins does not work

I am currently working on a project with LoRaWAN technology using an STM32F103C8T6 microcontroller. For LoRa I am using SPI in Full-Duplex Master mode (SPI1 specifically), and in CubeIDE, when you activate SPI1, pins PA5, PA6 and PA7 are activated automatically (ver1).
However, the PCB is already designed and printed, and those pins are unfortunately taken, because it was originally planned to use the other SPI1 pins (PB3, PB4, PB5) (ver2).
So, when I use ver1, all is good: LoRa connects to the server and sends data without a problem. However, when I use ver2, it does not work at all. I debugged to find where the problem is and found out that the SPI read fails (when the LoRa version register is read, it returns 0). Thus, the ASSERT fires and the code is stuck in an infinite loop. I could not find any reference to a difference between these SPI pins on the internet.
Can anyone explain the difference between these pins? And is it possible to use ver2? Thanks in advance.
P.S. I am using the HAL library + the LMIC library (for LoRa), and the SPI configuration is the same for both ver1 and ver2. Here is the configuration code, if needed:
void MX_SPI1_Init(void)
{
  hspi1.Instance = SPI1;
  hspi1.Init.Mode = SPI_MODE_MASTER;
  hspi1.Init.Direction = SPI_DIRECTION_2LINES;
  hspi1.Init.DataSize = SPI_DATASIZE_8BIT;
  hspi1.Init.CLKPolarity = SPI_POLARITY_LOW;
  hspi1.Init.CLKPhase = SPI_PHASE_1EDGE;
  hspi1.Init.NSS = SPI_NSS_SOFT;
  hspi1.Init.BaudRatePrescaler = SPI_BAUDRATEPRESCALER_64;
  hspi1.Init.FirstBit = SPI_FIRSTBIT_MSB;
  hspi1.Init.TIMode = SPI_TIMODE_DISABLE;
  hspi1.Init.CRCCalculation = SPI_CRCCALCULATION_DISABLE;
  hspi1.Init.CRCPolynomial = 10;
  if (HAL_SPI_Init(&hspi1) != HAL_OK)
  {
    Error_Handler();
  }
}
P.P.S.: I also asked this question on Electronics Stack Exchange, but there was no answer there, so I decided to share it here too.
After lots of tries, I found out that the remapped SPI1 does not work together with I2C1, because the I2C1 SMBA pin overlaps with the SPI1 MOSI pin (PB5), even if you are not using SMBA. You can read about it in the STM32F103x8 errata, chapter 2.8.7.
So I guess I will use I2C2 to avoid the collision. The only change I need to make on the PCB is rerouting the I2C1 pins to I2C2 (2 pins), which is much better than rerouting the SPI1 pins (3 pins) plus the other elements occupying the ver1 pins (also 3).
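For reference, a hedged sketch of what the MSP-level pin setup for ver2 could look like with HAL (illustrative only, not the exact CubeMX output; check the HAL_SPI_MspInit generated for your project):

void HAL_SPI_MspInit(SPI_HandleTypeDef *hspi)
{
  if (hspi->Instance == SPI1)
  {
    GPIO_InitTypeDef GPIO_InitStruct = {0};

    __HAL_RCC_SPI1_CLK_ENABLE();
    __HAL_RCC_GPIOB_CLK_ENABLE();
    __HAL_RCC_AFIO_CLK_ENABLE();

    /* PB3/PB4 are JTAG pins after reset; release them (SWD stays usable). */
    __HAL_AFIO_REMAP_SWJ_NOJTAG();

    /* Route SPI1 to PB3 (SCK), PB4 (MISO), PB5 (MOSI). */
    __HAL_AFIO_REMAP_SPI1_ENABLE();

    GPIO_InitStruct.Pin = GPIO_PIN_3 | GPIO_PIN_5;   /* SCK, MOSI */
    GPIO_InitStruct.Mode = GPIO_MODE_AF_PP;
    GPIO_InitStruct.Speed = GPIO_SPEED_FREQ_HIGH;
    HAL_GPIO_Init(GPIOB, &GPIO_InitStruct);

    GPIO_InitStruct.Pin = GPIO_PIN_4;                /* MISO */
    GPIO_InitStruct.Mode = GPIO_MODE_INPUT;
    GPIO_InitStruct.Pull = GPIO_NOPULL;
    HAL_GPIO_Init(GPIOB, &GPIO_InitStruct);

    /* Per errata 2.8.7: keep I2C1 disabled (use I2C2 instead), otherwise its
       SMBA function interferes with the remapped SPI1 MOSI on PB5. */
  }
}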

debugging vxworks loadModule failure

I have a VxWorks Image Project (without a file system) on an MPC5200B, using the DIAB toolchain.
I need to dynamically load a module from flash.
I allocated memory on my stack, char myTemporaryModuleData[MAX_MODULE_SIZE],
and filled it with the module data from flash
(I checked that the binary data is intact).
Then I create a memory device: memDevCreate("/mem/mem01", myTemporaryModuleData, moduleReadLength),
open the pseudo-stream: int fdModuleData = open("/mem/mem01", O_RDONLY, 777);
and run int mId = loadModule(fdModuleData, LOAD_ALL_SYMBOLS);
I did not see anything in the console after running loadModule(),
but mId == 0, which indicates failure :(.
getErrno() returned 0x3D0004 (S_objLib_OBJ_TIMEOUT).
NOTE: it didn't take long at all to fail => timeout?
I tried replacing the module with a simple void foo() { printf(...); } module, but it still fails with the same issue.
I also tried loading an .out instead of a .o.
Unfortunately, nothing got me anywhere.
How can I know what caused it to fail? (A log, a last error, anything else I should check?)
FOUND IT.
Apparently, it was a mistake in the data read from the flash.
What I can contribute is that loadModule() from a memDrv device is possible and works.
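For anyone else hitting this, here is a hedged sketch of the sequence that ended up working (kernel-side code; the device name, error handling and the wrapper function are illustrative):

#include <vxWorks.h>
#include <memDrv.h>
#include <loadLib.h>
#include <moduleLib.h>
#include <ioLib.h>
#include <fcntl.h>

MODULE_ID loadModuleFromFlash (char *moduleData, int moduleReadLength)
{
    MODULE_ID mId = NULL;
    int fd;

    /* Expose the RAM copy of the module as a pseudo-file
       (memDrv() must have been called once at startup). */
    if (memDevCreate ("/mem/mem01", moduleData, moduleReadLength) != OK)
        return NULL;

    fd = open ("/mem/mem01", O_RDONLY, 0);
    if (fd >= 0)
    {
        mId = loadModule (fd, LOAD_ALL_SYMBOLS);   /* NULL on failure, check errno */
        close (fd);
    }

    memDevDelete ("/mem/mem01");
    return mId;
}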

How do I get the MAC address using vxWorks6.8 API in real-time process?

I know about endFindByName() and muxIoctl(), but these two functions depend on "muxLib.h", and END_OBJ depends on "end.h". These two header files can only be used in kernel mode.
Not 100% sure, but try using ioctl() with SIOCGIFLLADDR:
struct ifreq ifr;
sock = socket (AF_INET, SOCK_DGRAM, 0);
strncpy (ifr.ifr_name, "gei0", sizeof (ifr.ifr_name));   /* interface name; "gei0" is just an example */
ioctl (sock, SIOCGIFLLADDR, &ifr);
close (sock);
The MAC address is then in ifr.ifr_ifru.ifru_addr.

unable to use f_read() and f_lseek() in Fatfs

I'm trying to connect a 2 GB class 6 SD card to an stm32f091cctx MCU via SPI, using the FatFs library ver. R0.13a. I'm able to mount the drive and open the file with the f_mount and f_open functions. But when it comes to reading from the file, it just freezes somewhere in f_read(). Also, when I try to change the position of the pointer with f_lseek, it freezes again. f_lseek works only when I call it as f_lseek(&MyFile, 0).
This part of my code is as below:
if (FATFS_LinkDriver(&SD_Driver, SDPath) == 0)
{
  f_mount(&SDFatFs, (TCHAR const*)SDPath, 1);
  f_open(&MyFile, "SAMPLE1.WAV", FA_READ);
  f_lseek(&MyFile, 200);
  f_read(&MyFile, rtext, 1000, (UINT*)&bytesread);
}
You have probably run out of heap space and ended up in the HardFault exception handler.
You can increase the heap size from CubeMX -> Project Settings, or directly in the startup (.s) file.
PS: Print something in the HardFault_Handler and Error_Handler functions to see when something goes wrong.
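As a hedged illustration of that suggestion (it assumes printf is already retargeted to a UART or SWO; otherwise toggle an LED or set a breakpoint instead):

#include <stdio.h>

/* stm32f0xx_it.c -- make the fault visible instead of spinning silently */
void HardFault_Handler(void)
{
  printf("HardFault (possible stack/heap overflow)\r\n");
  while (1)
  {
  }
}

With a GCC-based toolchain the heap size ends up as the _Min_Heap_Size symbol in the generated linker script; a Keil-style startup file defines it with a Heap_Size EQU line.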