Can I run Unreal Engine 5 on my laptop? Below are the specifications

My laptop specifications:
NVIDIA GeForce 840M graphics card
12 GB RAM
Intel i5 processor
15.6 inch screen

You will be able to "run" the editor, edit Blueprint graphs, and create simple levels without complex shaders or materials and with a low number of assets,
but it will be really hard to do anything more, especially if you want to dig into C++ with Visual Studio.
Just download UE5 via the Epic launcher and check it.
Remember to install Unreal on an SSD if you have one. The same goes for projects.
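If you do end up digging into C++, note that even a minimal actor class kicks off a full Visual Studio/UnrealBuildTool compile, which is where this class of hardware struggles most. A sketch of such a minimal class (the name AMyActor and the file layout are just the defaults the class wizard would generate):

```cpp
// MyActor.h -- minimal UE5 C++ actor, useful mainly to gauge compile and
// iteration times on this hardware before committing to a C++ workflow.
#include "CoreMinimal.h"
#include "GameFramework/Actor.h"
#include "MyActor.generated.h"   // produced by UnrealHeaderTool at build time

UCLASS()
class AMyActor : public AActor
{
    GENERATED_BODY()
public:
    AMyActor() { PrimaryActorTick.bCanEverTick = true; }

    virtual void Tick(float DeltaTime) override
    {
        Super::Tick(DeltaTime);
        // Empty on purpose: the first build of even an empty class compiles
        // the whole game module, so it is a fair benchmark for the machine.
    }
};
```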

Hmm, I think that you can, but expect it to be very slow, especially when the shaders are compiling.

Related

C++ IDE to run on an embedded Linux device

I am putting together a very simple device for developing C++ applications on the go. Something like a PSP, size-wise, with a small keyboard to write code and an LCD screen for the output. The device also has a DVI out, so I can output to a screen too.
Now, I am looking for a very lightweight C++ IDE for X11 (the device runs a Linux variant of OpenEmbedded); something that is able to work decently on the DVI monitor but mainly on the small LCD screen (a 4.3 inch screen with a 480x272 resolution).
I am aware that anything under 1024x768 is just pure joy for pain lovers, but the device will be used only in particular situations, when a laptop/netbook is not feasible.
While using the DVI out, I have no problems running Gnome; the screen is big enough to allow me to work comfortably. The problem is that it won't scale very well on the 480x272 screen.
The first idea was to use nano and g++, without even loading X11, but I need something with code completion and a minimal UI for the standard operations (build, run, step-by-step debugging, breakpoints), if possible. The memory is not that big (256 MB), so the smaller the IDE, the better.
What would you suggest, other than the text editor and g++ approach? I am new to the embedded Linux world. I use Eclipse on Ubuntu, but that one is incredibly huge and would just kill such a small device. On Windows I would use Dev-C++, while on Mac I use either Code::Blocks or Xcode, but I don't really need all those features on the device.
Thanks in advance
I had the same problem, developing on an ODroid (http://hardkernel.com) running Linux on an ARM processor. For a while I used NetBeans, but it was too heavy, so I finally gave up and wrote my own open source IDE for the purpose.
https://github.com/amirgeva/coide
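Failing that, the plain editor-plus-toolchain route the question mentions does cover all four operations (build, run, breakpoints, stepping) through gdb, with no GUI at all. A minimal sketch of that workflow, with the standard commands as comments (file name illustrative):

```cpp
// hello.cpp -- the nano + g++ + gdb fallback workflow in one file.
//
//   g++ -g -Og hello.cpp -o hello   # -g keeps debug info, -Og stays debuggable
//   gdb ./hello
//   (gdb) break main                # set a breakpoint
//   (gdb) run                       # run until the breakpoint
//   (gdb) next                      # step over ('step' steps into calls)
//   (gdb) print total               # inspect a variable
#include <iostream>

int main() {
    int total = 0;
    for (int i = 1; i <= 5; ++i)
        total += i;                  // good spot for a breakpoint
    std::cout << "total = " << total << '\n';
    return 0;
}
```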

Kinect hangs suddenly after working well for a few seconds. How can I fix it?

I tried using "Kinect for Windows" on my Mac. The environment set-up seems to have gone well, but something seems to be wrong. When I start samples such as
OpenNI-Bin-Dev-MacOSX-v1.5.4.0/Samples/Bin/x64-Release/Sample-NiSimpleViewer
or others, the sample application starts and seems to work quite well at the beginning, but after a few seconds (10 to 20 seconds), the movement shown on the application's screen halts and never recovers. It seems the application becomes unable to fetch data from the Kinect after a certain point.
I don't know whether the libraries, their dependencies, or the Kinect hardware itself is at fault (the hardware could be invisibly broken, for instance), and I really want to know how to tell which it is.
Could anybody tell me how I can fix this issue, please?
My environment is shown below:
Mac OS X v10.7.4 (MacBook Air, Core i5 1.6 GHz, 4 GB of memory)
Xcode 4.4.1
Kinect for Windows
OpenNI-Bin-Dev-MacOSX-v1.5.4.0
Sensor-Bin-MacOSX-v5.1.2.1
I followed the instructions here about libusb: http://openkinect.org/wiki/Getting_Started#Homebrew
Also, when I try using libfreenect (I know it's separate from OpenNI+SensorKinect), its sample applications say "Number of devices found: 0", which makes no sense to me since I have certainly connected my Kinect to the MBA.
Unless you're booting into Windows, forget about Kinect for Windows.
Regarding libfreenect and OpenNI: in most cases you'll use one or the other, so think about what functionality you need.
If it's basic RGB+depth image access (and possibly motor and accelerometer access), libfreenect is your choice.
If you need RGB+depth images plus skeleton tracking and (hand) gestures (but no motor or accelerometer access), use OpenNI. Note that if you use the unstable (dev) versions, you should use Avin's SensorKinect driver.
The easiest thing to do is a nice clean install of OpenNI.
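After the clean install, a bare-bones depth-read loop is a quick way to tell whether the data feed itself dies after 10-20 seconds (driver/sensor problem) or only the sample applications do. A minimal sketch, assuming the OpenNI 1.x C++ wrapper (XnCppWrapper.h):

```cpp
// Sanity check: print the depth frame counter. If it stops advancing after
// 10-20 seconds, the stream itself is halting, not the viewer sample.
#include <XnCppWrapper.h>
#include <cstdio>

int main() {
    xn::Context ctx;
    XnStatus rc = ctx.Init();
    if (rc != XN_STATUS_OK) {
        std::printf("Init failed: %s\n", xnGetStatusString(rc));
        return 1;
    }
    xn::DepthGenerator depth;
    rc = depth.Create(ctx);
    if (rc != XN_STATUS_OK) {
        std::printf("No depth generator: %s\n", xnGetStatusString(rc));
        return 1;
    }
    ctx.StartGeneratingAll();
    for (int i = 0; i < 900; ++i) {                  // roughly 30 s at 30 fps
        rc = ctx.WaitOneUpdateAll(depth);
        if (rc != XN_STATUS_OK) {                    // a mid-stream hang shows up here
            std::printf("Update failed: %s\n", xnGetStatusString(rc));
            break;
        }
        xn::DepthMetaData md;
        depth.GetMetaData(md);
        std::printf("frame %u\n", (unsigned)md.FrameID());
    }
    ctx.Release();
    return 0;
}
```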
Also, if it helps, you can use a creative coding framework like Processing or OpenFrameworks.
For Processing I recommend SimpleOpenNI.
For OpenFrameworks you can use ofxKinect, which ties into libfreenect, or ofxOpenNI. Download the OpenFrameworks package on the FutureTheatre Kinect Workshop wiki, as it includes both addons and some really nice examples.
When you connect the Kinect to the machine, have you provided external power to it? The device will appear connected to the computer on USB power alone, but it will not be able to transfer data, as it needs the external power supply.
Also, what Kinect sensor are you using? If it is a new Kinect device (designed for Windows), it may have a different device signature, which may cause the OpenNI drivers to play up. I'm not 100% sure on this one, but I've only ever tried OpenNI with an Xbox 360 sensor.
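On the "Number of devices found: 0" symptom from libfreenect specifically, the smallest possible enumeration check can separate "not detected at all" (typically USB or power) from "data stops mid-stream". A sketch against libfreenect's C API (the header path varies by install):

```cpp
// Minimal libfreenect enumeration check: does the sensor show up at all?
#include <libfreenect/libfreenect.h>   // sometimes installed as <libfreenect.h>
#include <cstdio>

int main() {
    freenect_context* ctx = nullptr;
    if (freenect_init(&ctx, nullptr) < 0) {
        std::printf("freenect_init failed\n");
        return 1;
    }
    int n = freenect_num_devices(ctx);   // the count the samples report
    std::printf("Number of devices found: %d\n", n);
    // 0 with the sensor plugged in usually means the external power brick is
    // missing/unplugged, or another driver (e.g. SensorKinect) has claimed it.
    freenect_shutdown(ctx);
    return 0;
}
```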

When profiling, most of the time is spent in nvoglv64.dll. What should I deduce?

I am profiling a C++ application with Intel VTune Amplifier. Most of the time seems to be spent in nvoglv64.dll, more precisely in DrvPresentBuffers and/or KeSynchronizeExecution. Note that I have an NVIDIA GeForce graphics card.
I am new to the application I am profiling and am looking for bottlenecks and low-hanging optimization fruit. Since most of the time seems to be spent in this NVIDIA DLL, I do not know how to decode the profiling results.
I would like to know where those calls come from on my application's side, in order to build up knowledge of my application. Can someone give me some hints on where to start:
When exactly does an application call DrvPresentBuffers, and what kind of calls should I look at (on my application's side)?
Where can I get more info about how to profile, understand and optimize applications whose bottlenecks are in the graphics card DLLs?
DrvPresentBuffers is part of the present/draw code for OpenGL; nvoglv64.dll is the 64-bit OpenGL driver for your NVIDIA card. There is a known performance issue with this function on 64-bit Windows 7 on many drivers. I couldn't find a link, but you can search the NVIDIA forum if you are experiencing problems. If nothing is wrong and nothing is running horribly slowly, then I'm not sure optimization is where I would start when familiarizing myself with a new application.
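On the application side, the driver's present work runs inside the buffer-swap call at the end of each frame, so time attributed to DrvPresentBuffers often just means the driver is blocking there (e.g. waiting for vsync) rather than doing real work. A sketch of how to check that, using GLFW for brevity (the application may use raw Win32 SwapBuffers(hdc) instead):

```cpp
// Time the present call. With vsync on, a large, steady "present" time that
// tracks the refresh rate means waiting, not a real bottleneck in your code.
#include <GLFW/glfw3.h>
#include <chrono>
#include <cstdio>

int main() {
    glfwInit();
    GLFWwindow* win = glfwCreateWindow(640, 480, "present timing", nullptr, nullptr);
    glfwMakeContextCurrent(win);
    glfwSwapInterval(1);                        // vsync on: the swap may block

    while (!glfwWindowShouldClose(win)) {
        glClear(GL_COLOR_BUFFER_BIT);           // stand-in for the real draw calls

        auto t0 = std::chrono::steady_clock::now();
        glfwSwapBuffers(win);                   // DrvPresentBuffers runs under here
        auto t1 = std::chrono::steady_clock::now();

        std::printf("present: %6.2f ms\n",
            std::chrono::duration<double, std::milli>(t1 - t0).count());
        glfwPollEvents();
    }
    glfwTerminate();
    return 0;
}
```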

Is it possible to do GPU programming if I have an integrated graphics card?

I have an HP Pavilion laptop; its so-called graphics card is some sort of integrated NVIDIA chip running on shared memory. To give you an idea of its capabilities: if a video game was made in the last 5 years at a cost of more than a couple million dollars, it just won't be playable on my computer.
Anyways, I was wondering if I could do GPU programming, like CUDA, on this thing. I don't expect it to be fast, I'd just like to get the experience and not make my laptop catch fire in the meanwhile.
Find out what GPU your laptop is, and compare it against this list: http://en.wikipedia.org/wiki/CUDA#Supported_GPUs. Most likely, CUDA will not be supported.
This doesn't necessarily prevent you from doing "GPU programming", however. If the GPU supports fragment and vertex shaders, you can use the fixed pipeline to send data to the card (for example, through texture data) and do your processing in a fragment shader. You will then do a read from the pixel buffer to get the data back into system memory. Though hackish, this approach was quite popular until CUDA and other frameworks like OpenCL were introduced.
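For concreteness, here is a condensed sketch of that texture-in, fragment-shader-compute, read-back pattern (GLEW + freeglut used for setup; error checks omitted, and real code of this era would render into a pbuffer/FBO rather than a visible window):

```cpp
// "GPGPU" on a shader-capable but pre-CUDA GPU: upload data as a texture,
// run a fragment shader over a full-screen quad, read the pixels back.
#include <GL/glew.h>
#include <GL/glut.h>
#include <cstdio>
#include <vector>

static const char* kKernel =
    "uniform sampler2D data;\n"
    "void main() {\n"
    "  vec4 v = texture2D(data, gl_TexCoord[0].st);\n"
    "  gl_FragColor = v * 0.5;\n"   // the 'kernel': halve every value
    "}\n";

int main(int argc, char** argv) {
    const int W = 4, H = 4;
    glutInit(&argc, argv);
    glutInitWindowSize(W, H);
    glutCreateWindow("gpgpu");                       // GL context for GLEW
    glewInit();

    GLuint fs = glCreateShader(GL_FRAGMENT_SHADER);  // compile the 'kernel'
    glShaderSource(fs, 1, &kKernel, nullptr);
    glCompileShader(fs);
    GLuint prog = glCreateProgram();
    glAttachShader(prog, fs);
    glLinkProgram(prog);

    std::vector<float> in(W * H * 4, 1.0f);          // input data -> texture
    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, W, H, 0, GL_RGBA, GL_FLOAT, in.data());

    glUseProgram(prog);                              // run it over a quad
    glViewport(0, 0, W, H);
    glBegin(GL_QUADS);
    glTexCoord2f(0, 0); glVertex2f(-1, -1);
    glTexCoord2f(1, 0); glVertex2f( 1, -1);
    glTexCoord2f(1, 1); glVertex2f( 1,  1);
    glTexCoord2f(0, 1); glVertex2f(-1,  1);
    glEnd();

    std::vector<float> out(W * H * 4);               // read results back
    glReadPixels(0, 0, W, H, GL_RGBA, GL_FLOAT, out.data());
    std::printf("out[0] = %f\n", out[0]);            // 0.5: 1.0 halved on the GPU
    return 0;
}
```

Note the big limitation: without float render targets, values are clamped to [0,1] on the way back, which is part of why this approach counts as hackish.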

Embedded application

Over the last two months I've worked on a simple application using a computer vision library (OpenCV).
I wish to run that application directly on the webcam hardware, without the need for an OS. I'm curious to know whether my application can be burned onto a chip so it can run without an OS.
Of course the process can be expensive, but I'm just curious. Do you have any links about that?
P.S.: the application is written in C.
I'd use something bigger than a PIC, for example a small 32-bit ARM processor.
Yes. It is theoretically possible to port your app to PIC chips.
But...
There are C compilers for the PIC chip; however, due to the limitations of a microcontroller, you might find that the compiler, and the microcontroller itself, are far too limited for computer vision work, especially if your initial implementation of the app was done on a full-blown PC:
You'll only have integer math available to you in most cases, if not all (don't quote me on that, but our devs at work don't have floating-point math for their PIC apps, and it causes many foul words to emanate from their cubes). Either that, or you'll need to hook up an external math coprocessor. (The usual software workaround is fixed-point arithmetic; see the sketch after this list.)
You'll have to figure out how to get the PIC chip to talk USB to the camera. I know this is possible, but it will require additional hardware, and R&D time.
If you need strict timing control, you might even have to program the app in assembler.
You'd have to port portions of OpenCV to the PIC chip, if that hasn't been done already. My guess is it hasn't.
If you're not already familiar with microcontroller programming, you'll need some time to get up to speed on the differences between desktop PC programming and microcontroller programming, and you'll have to gain some experience with it. This may not be an issue for you.
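For the integer-math point above, here is a minimal sketch of the usual fixed-point substitute for floating point (Q16.16 format; the type and helper names are just illustrative):

```cpp
// Q16.16 fixed-point: 16 integer bits, 16 fractional bits in an int32.
// This is how float-free microcontrollers typically fake real arithmetic.
#include <cstdint>
#include <cstdio>

typedef int32_t fix16;
static const fix16 FIX_ONE = 1 << 16;             // 1.0 in Q16.16

static fix16 fix_from_int(int x)   { return (fix16)x << 16; }
static float fix_to_float(fix16 x) { return (float)x / FIX_ONE; }

static fix16 fix_mul(fix16 a, fix16 b) {
    // Widen to 64 bits so the intermediate product doesn't overflow.
    return (fix16)(((int64_t)a * b) >> 16);
}

int main() {
    fix16 half  = FIX_ONE / 2;                    // 0.5
    fix16 three = fix_from_int(3);                // 3.0
    std::printf("3 * 0.5 = %f\n", fix_to_float(fix_mul(three, half)));  // 1.5
    return 0;
}
```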
Basically, it would probably be best to re-write the whole program from scratch given the PIC chip constraints. The good thing is, though, that you've done a lot of the design work already. It would mainly be hardware/porting work.
OR...
You could try using a small embedded x86 single-board PC, perhaps in the PC/104 form factor, with your OS/app on a CF card. It's a real bona fide PC; you just add your software. The good thing is, you probably wouldn't have to re-write your app, unless it has a ridiculous memory footprint. Embedded PC vendors are starting to ship boards based on 1 GHz Intel Atoms, and if you needed more help you could perhaps hook a daughterboard onto the PC/104 bus. You'd work around all of the limitations listed above, as you'd be using a platform equivalent to the PC you developed your app on. And it has USB ports! If you do a thorough cost analysis, and if you're cool with a larger form factor, you might find it cheaper/quicker to use a system based on an SBC than to roll a solution using PIC chips/microcontrollers.
A quick Google search for PC/104 reveals many SBC vendors.
OR...
And this would be really cheap: just get an off-the-shelf cheap netbook, overwrite the OEM OS, and run the code on there. Hackish, but cheap and really easy; your hardware issues would be resolved within a week.
Just some ideas.
I think you'll find this might grow into a pretty large project.
It's obviously possible to implement a stand-alone hardware solution to do something like this. Off the top of my head, Rabbit's solutions might get you to the finish line faster. But you might be able to find some home-grown BeagleBoard or Gumstix projects as well.
Two Google links I wanted to emphasize:
Rabbit: "Camera Interface Application Kit"
Gumstix: "Connecting a CMOS camera to a Gumstix Connex motherboard"
I would second Nate's recommendation to take a look at Rabbit's core modules.
Also, GHI Electronics has a product called the Embedded Master that runs the .NET Micro Framework and has USB host/device capabilities built in, as well as a rich library that is a subset of the .NET Framework. It runs on an ARM processor and is fairly inexpensive (> $85). Though not nearly as cheap as a single PIC chip, it does come with a lot of glue logic pre-built onto the module.
CMUcam
I think you should have a look at the CMUcam project, which offers affordable hardware and an image processing library which runs on their hardware.