I'm developing an app that shares a D3D11 texture between processes.
I'm developing on a medium-to-high-end PC: Windows 10, an i7-10700F, and a GeForce RTX 2070. Here is what Task Manager reports for CPU and GPU usage:
APP_NAME | CPU | GPU |
APP1.exe | 0.3% | 5% |
APP2.exe | 0.7% | 4% |
DWM | 0.8% | 2.5% |
Now on a Windows 11 machine with an i3-1115G4 and integrated graphics, Task Manager shows:
APP_NAME | CPU | GPU |
APP1.exe | 9% | 26% |
APP2.exe | 8% | 23% |
DWM | 28% | 32% |
System | 10% | 5% |
This is a big difference: the GPU is almost maxed out and CPU usage is very high, which is expected given the machine has only 2 cores (4 threads) and integrated graphics.
But why is DWM (Desktop Window Manager) using so much CPU and GPU compared to the higher-end PC? What is it doing with those resources?
Note the app works just fine with no hiccups. However, the app is still in early development, so more CPU and GPU processing will be added, and both are already almost maxed out. I'm aware that my app's requirements call for a faster PC, but I wonder if there is something wrong with my app's architecture.
App architecture
App1 is a borderless window covering one screen, where I render to an offscreen texture which is then rendered to its swap chain. This texture is created with D3D11_RESOURCE_MISC_SHARED_KEYEDMUTEX and D3D11_RESOURCE_MISC_SHARED_NTHANDLE.
App2 is another window on another screen that opens the shared texture handle, composes it with its own scene, and presents it on its swap chain.
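For reference, the producer/consumer handoff for such a texture is normally guarded by the keyed mutex. The following is a minimal, Windows-only sketch, not the questioner's actual code: function names and the texture size are illustrative, error handling is omitted, and the NT handle must still be transferred into App2's process (e.g. via DuplicateHandle).

```cpp
#include <d3d11_1.h>
#include <dxgi1_2.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

// App1 side: create the texture and export an NT handle for it.
// (Illustrative only: real code must check every HRESULT.)
HANDLE CreateSharedTexture(ID3D11Device* device, ComPtr<ID3D11Texture2D>& tex)
{
    D3D11_TEXTURE2D_DESC desc = {};
    desc.Width            = 1920;   // illustrative size
    desc.Height           = 1080;
    desc.MipLevels        = 1;
    desc.ArraySize        = 1;
    desc.Format           = DXGI_FORMAT_B8G8R8A8_UNORM;
    desc.SampleDesc.Count = 1;
    desc.Usage            = D3D11_USAGE_DEFAULT;
    desc.BindFlags        = D3D11_BIND_RENDER_TARGET | D3D11_BIND_SHADER_RESOURCE;
    desc.MiscFlags        = D3D11_RESOURCE_MISC_SHARED_KEYEDMUTEX |
                            D3D11_RESOURCE_MISC_SHARED_NTHANDLE;
    device->CreateTexture2D(&desc, nullptr, &tex);

    ComPtr<IDXGIResource1> res;
    tex.As(&res);
    HANDLE shared = nullptr;
    res->CreateSharedHandle(nullptr,
                            DXGI_SHARED_RESOURCE_READ | DXGI_SHARED_RESOURCE_WRITE,
                            nullptr, &shared);
    return shared;   // NT handle: duplicate it into App2's process
}

// App2 side: open the handle and guard all access with the keyed mutex.
void ComposeSharedTexture(ID3D11Device1* device2, HANDLE shared)
{
    ComPtr<ID3D11Texture2D> sharedTex;
    device2->OpenSharedResource1(shared, IID_PPV_ARGS(&sharedTex));

    ComPtr<IDXGIKeyedMutex> km;
    sharedTex.As(&km);
    km->AcquireSync(0, INFINITE);   // block until App1 releases key 0
    // ... draw sharedTex into App2's scene here ...
    km->ReleaseSync(0);             // hand ownership back to App1
}
```

One thing to watch: if both windows run at full frame rate, every AcquireSync/ReleaseSync pair serializes the two processes' GPU work, which costs relatively more on a slow integrated GPU. DWM also composites both borderless windows every vsync, which is one plausible contributor to its larger share on the i3.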
Related
On two PCs, the exact same data is exported from “Cinema 4D” with the “Redshift” renderer.
Comparing the two, one uses the GPU at 100% while the other uses very little (both use about the same amount of GPU memory).
Cinema4D, Redshift and GPU driver versions are the same.
GPU is RTX3060
64GB memory
OS is windows 10
M2.SSD
The only difference is the CPU:
a 12th Gen Intel Core i9-12900K on the machine using the GPU at 100%,
an AMD Ryzen 9 5950X (16 cores) on the other.
Why is the GPU utilization so different?
Also, is it possible to adjust the PC's settings so that it uses the GPU at 100%?
Azure SQL Database Managed Instance can be created on two different hardware generations Gen5 and Gen4 with the following differences:
https://learn.microsoft.com/en-us/azure/sql-database/sql-database-managed-instance-resource-limits#hardware-generation-characteristics
Are there any guidelines on which scenarios call for Gen4 and which for Gen5?
Gen5 is better for some workloads while Gen4 is better for others. However, in most cases the primary choice should be Gen5, unless a bigger memory/core ratio or the difference between physical and logical cores makes a big difference.
Gen5 has network acceleration, so in most cases it should provide better IO bandwidth to remote storage on General Purpose than Gen4; remote-storage IO can be the biggest bottleneck in your workload.
Gen5 is a newer hardware configuration than Gen4: the Gen5 processors are Intel Broadwell instead of Intel Haswell. However, Gen5 uses hyperthreading, and a vCore on Gen5 is a logical processor; this might make some difference, but you would need to test it. A vCore is the same price on both hardware generations.
Gen5 uses faster local SSDs (fast NVMe) than Gen4, so in the Business Critical case Gen5 should have an advantage. In both generations tempdb is placed on local SSD, in General Purpose as well as Business Critical, so workloads that depend on tempdb would run faster.
Gen4 has a bigger memory/core ratio than Gen5: 7 GB per vCore on Gen4 vs 5.1 GB on Gen5.
Gen4 offers only an 8-24 core range, with proportional memory of 56-168 GB, while Gen5 can go up to 80 cores. Also, new configurations such as SKUs with fewer than 8 cores will probably be available only on Gen5 hardware.
Gen 4 is no longer available for new purchases.
Note that with Gen5 General Purpose you have to buy at least 2 cores, whereas with Gen4 you could buy 1 core. The price per core has not changed, so your total price has doubled.
The same applies to Business Critical: on Gen4 the minimum is 2 cores, while on Gen5 the minimum is 4 cores. Again, this doubles the cost. It is particularly painful if you wanted to go from General Purpose to Business Critical, because the per-core cost there is already about double.
The other killer with Business Critical on Gen5 hardware is that the maximum number of databases has STAYED at 50. They double your costs and keep you at 50 databases! There is no reason Business Critical can't start at 2 cores on Gen5 as it does on Gen4.
Even though not every detail is relevant for this question, I will list my setup nonetheless:
NUCLEO_F746ZG microcontroller board (https://os.mbed.com/platforms/ST-Nucleo-F746ZG/).
I run mbed CLI (https://os.mbed.com/docs/v5.7/tools/arm-mbed-cli.html) to program the chip.
My OS is Windows 10, 64-bit
To compile my code and flash the binary to the chip, I issue the following command in my cmd terminal:
> mbed compile -t GCC_ARM -m NUCLEO_F746ZG --flash
I get the following output:
...
+------------------+-------+-------+-------+
| Module | .text | .data | .bss |
+------------------+-------+-------+-------+
| [fill] | 130 | 4 | 10 |
| [lib]\c.a | 24965 | 2472 | 89 |
| [lib]\gcc.a | 3120 | 0 | 0 |
| [lib]\misc | 252 | 16 | 28 |
| mbed-os\drivers | 658 | 4 | 100 |
| mbed-os\features | 74 | 0 | 12556 |
| mbed-os\hal | 2634 | 4 | 66 |
| mbed-os\platform | 2977 | 4 | 270 |
| mbed-os\rtos | 15887 | 168 | 5989 |
| mbed-os\targets | 16013 | 4 | 1052 |
| source\main.o | 244 | 4 | 84 |
| Subtotals | 66954 | 2680 | 20244 |
+------------------+-------+-------+-------+
Total Static RAM memory (data + bss): 22924 bytes
Total Flash memory (text + data): 69634 bytes
Image: .\BUILD\NUCLEO_F746ZG\GCC_ARM\nucleo_f746zg_demo.bin
[mbed] Detected "NUCLEO_F746ZG" connected to "E:" and using com port "COM10"
1 file(s) copied.
I'm particularly interested in the last lines, where the actual flashing of the chip takes place:
Image: .\BUILD\NUCLEO_F746ZG\GCC_ARM\nucleo_f746zg_demo.bin
[mbed] Detected "NUCLEO_F746ZG" connected to "E:" and using com port "COM10"
1 file(s) copied.
I know from previous experience (before mbed CLI existed) that a lot goes into flashing a binary to a chip. For example, I had to start up OpenOCD, pass it a configuration file for the programmer (e.g. stlink-v2-1.cfg) and a configuration file for the target board (e.g. nucleo_f746zg.cfg). Finally, I had to hand the binary over to OpenOCD via a Telnet session or a GDB session. Everything is described extensively here: How to use the GDB (Gnu Debugger) and OpenOCD for microcontroller debugging - from the terminal?
Looking at mbed CLI flashing the chip, I get confused. What is happening in the background? Is mbed CLI secretly using OpenOCD to connect to the chip? Or perhaps pyOCD? Or something else?
mbed-cli is open source, you can find the repository here. If you search for "def compile_" you'll find the specific code for what is happening when you run mbed compile.
mbed-cli uses mbed-ls to detect your board and htrun to flash it. htrun has a variety of plugins for copying to different boards, including pyOCD, but in the most basic case it just copies the binary to the disk detected by mbed-ls.
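In that basic case, the "flash" step is literally a file copy onto the mass-storage drive the board's debug MCU exposes. A sketch of that idea (the Windows copy matches the question's setup; the rest is a portable stand-in with made-up paths, purely for illustration):

```shell
# On Windows, with the board mounted as E: (as mbed-ls detected above),
# flashing amounts to:
#   copy .\BUILD\NUCLEO_F746ZG\GCC_ARM\nucleo_f746zg_demo.bin E:\
# The debug MCU sees the new file on its virtual drive and programs the
# target over SWD.

# Portable stand-in demonstrating the host-side step:
mkdir -p /tmp/virtual_drive                  # stands in for the mounted board
printf 'binary image' > /tmp/demo.bin        # stands in for the built .bin
cp /tmp/demo.bin /tmp/virtual_drive/         # the entire "flash" operation
ls /tmp/virtual_drive
```

Everything beyond that copy happens on the board itself, which is why no OpenOCD or driver is needed on the host.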
I have not tried all of them, but the mbed-supported Nucleo boards show up as a virtual thumb drive, and you simply copy the .bin file over. There is no real magic to it from the host side; no software is required beyond what the operating system already has for mounting USB flash drives. There is a debug header on these boards, and in any case there is an MCU that manages the debug part (I call it the debug MCU), plus the MCU under test, the demonstration one you bought the board to play with. The mbed targets have generally been ARM parts exposing an SWD (JTAG-ish) interface, and the debug MCU very likely uses that interface.
OpenOCD is just one tool that knows the SWD protocol; that doesn't mean anyone has to use OpenOCD to talk to the MCU. You can write your own software to bit-bang the bus, talk to an FTDI chip via MPSSE, or use some other solution to generate the SWD protocol transitions.
In the simplest case, the firmware for a specific Nucleo board only has to know about the one STM32 it programs, nothing more. But since SWD is fairly generic, it may make sense to have a more generic debug-MCU firmware.
Now, these Nucleo and other STM32 debug MCUs also speak ST-LINK, which is separate from the looks-like-a-thumb-drive mechanism. ST-LINK is a protocol a host can use to ask the debug MCU to do things, just as MPSSE is a protocol/instruction set you use to ask some FTDI parts to do things for you (a bit different, but conceptually the same: speak one protocol to a proxy agent that acts on your behalf).
This mbed CLI could simply be copying the file over for you, which you could have done yourself, or maybe it speaks some other protocol. The first mbeds were based on NXP parts, not ST, and thus don't have the ST-LINK protocol on the front end. They had (and have) the just-copy-the-binary interface, which I remember seeing on someone's blog, so maybe they hired that person or borrowed that open-source project.
While the mbed sandbox may be great, I recommend you try the other options: first use mbed to build the binary and copy it over yourself, then use mbed to build it and perhaps OpenOCD through ST-LINK to write it to flash. ST and NXP parts have traditionally had a bootloader supporting a UART protocol; try that too, since that (or SWD) is what you would very likely use to get into a chip on a product built around these parts, rather than a hobby/eval board like the Nucleos. I also recommend trying bare metal without the libraries, just reading the manual; I find that easier than the libraries, YMMV. ST also has at least one set of its own libraries; I think they are in a transition between two software solutions, so perhaps try both, or try the new one, as the other will lose support.
You can also get the SWD spec, and there are GitHub and other open projects that can help. Take your Nucleo board and develop a program on one MCU to talk to another (MCUs have GPIO, making bit-banging easy; you can also bit-bang an FTDI part or do other things, you don't have to use an MCU) and try to learn and understand the protocol itself. It is used by all the Cortex-Ms thus far.
There is also a USB protocol, similar in spirit to ST-LINK, being pushed by ARM; the newer MSP432 LaunchPads use or support it, and you could study the ST-LINK protocol itself, for that matter.
Anyway, I digress. The Nucleo's (debug) USB exposes both the ST-LINK protocol and the I-am-a-thumb-drive interface, so the mbed tools are likely using one of those, probably the latter, since ST-LINK is unlikely to be found on non-ST products. Very likely the debug MCU uses SWD to program the development/demonstration MCU; I don't know how else it would do it.
I have a PC with an NVIDIA GT 520 2 GB DDR2 graphics card and an Intel Core 2 Duo processor with 6 GB of RAM.
While playing Assassin's Creed Syndicate, the graphics settings say it needs only 1850 MB of VRAM, yet the game lags badly. Is this due to the CPU?
I want to know how high-end games use the CPU versus the GPU.
I have an Asus laptop and it is very slow. It hangs a lot and is unable to run more than one application at a time. Its specifications are:
Processor : AMD C-60 APU with Radeon(tm) HD Graphics 1.00 GHz
Installed Memory(RAM) : 2.00 GB (1.60 GB usable)
System type : 64-bit Operating System
Its model is X53U, manufactured in 2012-04.
I want to know whether extending the RAM will help, or could you suggest anything else about what is wrong and what can be done to run applications smoothly?
Thanks.
Extending the RAM will indeed speed up your computer a little and will allow you to launch more applications. 2 GB is a really small amount of memory.
However, do not expect to have a beast after your upgrade: your CPU is quite slow, and it cannot be changed on a laptop.
If you haven't cleaned your laptop since 2012-04, dust can also be slowing it down.