I'm not sure if this is the right place to ask ... please tell me if there are better places for this question.
When writing Direct3D 11 programs, I observed the following problem:
I have two GPUs on my laptop: an Intel integrated GPU and an NVIDIA GT 640M. Usually the NVIDIA card is not the default (i.e. not the first one enumerated by IDXGIFactory::EnumAdapters).
In that case, I can still create a D3D11 device using D3D11CreateDevice or D3D11CreateDeviceAndSwapChain, setting pAdapter to the adapter of the NVIDIA card. The program runs correctly.
However, when the program terminates, the NVIDIA card is still doing SOMETHING... I have no idea what it is doing, but it obviously is: the icon is colorful, and the temperature of my laptop rises quickly.
If I set the NVIDIA card as the default for this particular program (which I did in the NVIDIA control panel), the problem disappears.
After many experiments, I have concluded that the problem occurs if and only if I create the device with a non-default adapter.
Is this an issue with the NVIDIA card, or with Direct3D? Is there a solution to it?
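For reference, the setup described above (explicitly picking the non-default adapter) boils down to something like the following. This is a minimal sketch, not the asker's actual code; the description-string match for "NVIDIA" is an assumption, and note that when a non-null pAdapter is passed, the driver type must be D3D_DRIVER_TYPE_UNKNOWN:

```cpp
#include <d3d11.h>
#include <dxgi.h>
#include <wrl/client.h>
#include <cwchar>

using Microsoft::WRL::ComPtr;

// Sketch: find the NVIDIA adapter by its description string,
// then create the D3D11 device on it explicitly.
ComPtr<ID3D11Device> CreateDeviceOnNvidia()
{
    ComPtr<IDXGIFactory1> factory;
    if (FAILED(CreateDXGIFactory1(IID_PPV_ARGS(&factory))))
        return nullptr;

    ComPtr<IDXGIAdapter1> adapter;
    for (UINT i = 0; factory->EnumAdapters1(i, &adapter) != DXGI_ERROR_NOT_FOUND; ++i)
    {
        DXGI_ADAPTER_DESC1 desc = {};
        adapter->GetDesc1(&desc);
        // Crude match; checking desc.VendorId == 0x10DE also works.
        if (wcsstr(desc.Description, L"NVIDIA"))
            break;
        adapter.Reset();
    }
    if (!adapter)
        return nullptr;  // no NVIDIA adapter found

    ComPtr<ID3D11Device> device;
    // With an explicit (non-null) adapter, the driver type
    // must be D3D_DRIVER_TYPE_UNKNOWN.
    HRESULT hr = D3D11CreateDevice(adapter.Get(), D3D_DRIVER_TYPE_UNKNOWN,
                                   nullptr, 0, nullptr, 0, D3D11_SDK_VERSION,
                                   &device, nullptr, nullptr);
    return SUCCEEDED(hr) ? device : nullptr;
}
```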
When playing certain games or viewing certain websites, my computer will suddenly crash and my monitor will display "HDMI no signal". The computer cannot be restarted without unplugging it from the wall. Looking at the crash report, I see event 10016, which I think is related to permissions, but I'm a moron. Any and all solutions are greatly appreciated. Relevant components are as follows:
Graphics Card: RTX 2080
Power supply: EVGA supernova 1000g2
Storage: SanDisk 500GB
CPU: Ryzen 2700X
Monitor: Both HP EliteDisplay E222 and another HP monitor
Since you are not supplying the crash report with your question, I can only suspect your problem is rooted in one of these:
Bug in the accompanying display driver and/or DirectX installation.
Proposed solution: try to obtain the latest driver for your RTX 2080, then do a 2D and 3D test run afterwards to ensure everything's proper.
Fan or cooling related issue. Some games may force your hardware to work harder, especially over continuous use. Check your fans and cooling to ensure they are moving and cooling as well as they should. Also install temperature-monitoring software if you need to be extra sure.
Hope those help m8
I have some dated equipment used to run an experimental apparatus. Unfortunately, that equipment will only run on WinXP using FireWire/IEEE1394, which is becoming more and more of a pain for us to maintain hardware-wise. Unfortunately we also don't have the money to replace this equipment. We discussed perhaps trying to virtualize the XP environment on a newer OS. I'd been reading about VFIO/IOMMU and figured maybe I could pass the FireWire PCI cards through and just do it that way.
Plus side - I got it to work. I installed XP with a QEMU-KVM hypervisor. Got it set up, passed the firewire cards through, and all was recognized in the VM, including when I attached the equipment to the FW cards. XP device manager saw that it was all there.
Unfortunately, I've found that the actual interaction with the hardware seems to be touchy at best. Things misbehave in weird, unexplainable ways. Some of those made me think that the guest OS wasn't communicating with the passed through cards correctly. This was surprising as I was under the impression that passed through cards were utilized directly by the guest OS without host OS intervention.
My question is basically: if I'm virtualizing an older system and passing through the various ports/cards needed, should it behave as if it were bare metal? Or are there circumstances where what the guest OS tries to do is not the same as on bare metal (i.e. the host OS changes something when the instruction leaves the VM)? As I said, I was under the impression that the guest OS interacts with the hardware directly, but experience has made me question whether this is actually the case.
Part of the reason I want to know is that there is other equipment, using other hardware interfaces, that would be more dangerous or could be damaged if it behaved unexpectedly (e.g. lasers whose power is computer-controlled). So if there is a risk that what the guest OS thinks it's doing is disconnected from reality, that's a safety risk I want to understand before going forward.
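For context, the passthrough setup described above amounts to binding the card to vfio-pci on the host and handing it to QEMU. A rough sketch of the relevant pieces; the PCI address 06:00.0 and the vendor:device ID are placeholders for the actual FireWire controller on your system:

```shell
# Host side: reserve the FireWire controller for vfio-pci
# (look up the real address/ID with: lspci -nn | grep -i firewire).
# Requires IOMMU enabled on the kernel cmdline: intel_iommu=on (or amd_iommu=on).
echo "options vfio-pci ids=104c:8023" > /etc/modprobe.d/vfio.conf

# Guest side: pass the controller through to the XP VM.
qemu-system-i386 \
  -machine pc,accel=kvm -m 1024 \
  -device vfio-pci,host=06:00.0 \
  -hda winxp.img
```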
Can anyone explain or point to an explanation (or at least to some clues) of how rendering in multi-gpu/multi-monitors setup work?
For example, I have 5 NVIDIA Quadro 4000 video cards installed and 9 displays connected to them. The displays are not grouped in any way, just arranged in Windows 7 so that the total resolution is 4096x2304. The cards are not connected with SLI either.
I have a Flash app which sees the 4096x2304 window as a single Stage3D context (using DX9) and can work with this quite unusual setup as though it were just one huge display with a single video card.
How does the rendering work internally? What video cards are actually doing? Do they share resources? Who renders all the stuff? Why do I get 29.9 fps doing mostly nothing in the app?
Thank you.
I don't know for DX, but for OpenGL I've collected this information here: http://www.equalizergraphics.com/documentation/parallelOpenGLFAQ.html
In short, on Windows with recent NVIDIA drivers, one GPU (typically the first) renders everything and the others get the content blitted to them. If you enable SLI Mosaic Mode, the GL commands are sent to all GPUs, giving you scalability for the fill rate.
I tried using "Kinect for Windows" on my Mac. The environment set-up seems to have gone well, but something seems to be wrong. When I start a sample such as
OpenNI-Bin-Dev-MacOSX-v1.5.4.0/Samples/Bin/x64-Release/Sample-NiSimpleViewer
or others, the sample application starts and seems to work quite well at first, but after a few seconds (10 to 20 seconds) the image shown in the application window freezes and never updates again. It seems that the application becomes unable to fetch data from the Kinect after a certain point.
I don't know whether the libraries, their dependencies, or the Kinect hardware itself (perhaps invisibly broken) is at fault, and I really want to know how to determine which it is.
Could anybody tell me how can I fix the issue please?
My environment is shown below:
Mac OS X v10.7.4 (MacBook Air, Core i5 1.6 GHz, 4GB of memory)
Xcode 4.4.1
Kinect for Windows
OpenNI-Bin-Dev-MacOSX-v1.5.4.0
Sensor-Bin-MacOSX-v5.1.2.1
I followed the instructions here about libusb: http://openkinect.org/wiki/Getting_Started#Homebrew
Also, when I try using libfreenect (I know it's separate from OpenNI+SensorKinect), its sample applications say "Number of devices found: 0", which makes no sense to me since my Kinect is certainly connected to my MacBook Air.
Unless you're booting into Windows, forget about Kinect for Windows.
Regarding libfreenect and OpenNI in most cases you'll use one or the other, so think of what functionalities you need.
If it's basic RGB+depth image access (and possibly motor and accelerometer access), libfreenect is your choice.
If you need RGB+depth images plus skeleton tracking and (hand) gestures (but no motor or accelerometer access), use OpenNI. Note that if you use the unstable (dev) versions, you should use Avin's SensorKinect driver.
The easiest thing to do is a nice clean install of OpenNI.
Also, if it helps, you can use a creative coding framework like Processing or OpenFrameworks.
For Processing I recommend SimpleOpenNI.
For OpenFrameworks you can use ofxKinect, which ties into libfreenect, or ofxOpenNI. Download the OpenFrameworks package on the FutureTheatre Kinect Workshop wiki as it includes both addons and some really nice examples.
When you connect the Kinect device to the machine, have you provided external power to it? The device will appear connected on USB bus power alone, but it will not be able to transfer data, as it needs the external power supply.
Also, which Kinect sensor are you using? If it is a new Kinect device (designed for Windows), it may have a different device signature, which may cause the OpenNI drivers to play up. I'm not 100% sure on this one, but I've only ever tried OpenNI with an Xbox 360 sensor.
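To help narrow down where the failure is, the "Number of devices found: 0" message the asker saw corresponds to a device-count check like this. A minimal sketch against the libfreenect C API; it assumes libfreenect is installed (the header path may differ on your system) and links with -lfreenect:

```cpp
#include <cstdio>
#include <libfreenect.h>   // header location may vary by install

int main()
{
    freenect_context *ctx = nullptr;
    if (freenect_init(&ctx, nullptr) < 0) {
        std::fprintf(stderr, "freenect_init failed\n");
        return 1;
    }

    // 0 here means the Kinect is not visible on USB at all
    // (cable, external power supply, or device-signature problem),
    // as opposed to a driver-level streaming issue.
    int n = freenect_num_devices(ctx);
    std::printf("Number of devices found: %d\n", n);

    freenect_shutdown(ctx);
    return 0;
}
```

If this prints 0 even with external power connected, the problem is upstream of OpenNI/SensorKinect entirely.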
I'm looking for a way to determine, in Objective-C, whether an application is using the GPU. I want to be able to tell if any applications currently running on the system have work going on on the GPU (i.e. a reason why the latest MacBook Pros would switch from the Intel HD graphics to the discrete graphics).
I've tried crossing the list of active windows with the list of windows whose backing store is in video memory using Quartz Window Services, but all that returns is the Dock, while other applications that I know are using the GPU (Photoshop CS5, Interface Builder) are not listed. Besides, the Dock doesn't require the 330M.
The source code of the utility gfxCardStatus might help.