Unreal Engine 4.8 fatal error at startup - game-engine

I want to start making games with Unreal Engine 4, so I downloaded a pre-compiled version of UE4 from a reliable third-party website; the same build worked for my friend.
To run UE4, I start UE4Editor.exe from the C:\UE4\Engine\Binaries\Win64 folder.
When I run it, the loading progress hangs at 0%, the following error message appears, and the editor quits.
Also, notice the path mentioned in the error message, D:\Unreal Engine 4.8.0 NV-Techs..., which is not correct! As I said, my UE4 folder is C:\UE4. What's wrong?
What can I do to solve the problem?
I have attached the log and dump files. If any other details are needed, please tell me.
Thanks for your attention
System specs:
Windows 8.1
CPU: Core i5
RAM: 4GB
GPU: AMD Radeon 6300M series
Log files:
UE4 logs.rar
Error message:
---------------------------
The UE4- Editor has crashed and will close
---------------------------
Fatal error: [File:D:\Unreal Engine 4.8.0 NV-Techs\Engine\Source\Runtime\Windows\D3D11RHI\Private\D3D11Util.cpp] [Line: 223]
Direct3DDevice->CreateTexture2D(TextureDesc,SubResourceData,OutTexture2D) failed
at D:\Unreal Engine 4.8.0 NV-Techs\Engine\Source\Runtime\Windows\D3D11RHI\Private\D3D11Texture.cpp:458
with error E_INVALIDARG,
Size=512x512x1 Format=(0x00000035), NumMips=1, Flags=D3D11_BIND_DEPTH_STENCIL D3D11_BIND_SHADER_RESOURCE
KERNELBASE.dll {0x00007ffce1d58b9c} + 0 bytes
UE4Editor-Core.dll {0x00007ffccc08087f} + 0 bytes
UE4Editor-Core.dll {0x00007ffccbec9dd8} + 0 bytes
UE4Editor-Core.dll {0x00007ffccbeaa7a2} + 0 bytes
UE4Editor-D3D11RHI.dll {0x00007ffcc26f4fd4} + 0 bytes
UE4Editor-D3D11RHI.dll {0x00007ffcc26efe51} + 0 bytes
UE4Editor-D3D11RHI.dll {0x00007ffcc269300f} + 0 bytes
UE4Editor-D3D11RHI.dll {0x00007ffcc26df7bc} + 0 bytes
UE4Editor-D3D11RHI.dll {0x00007ffcc26fb586} + 0 bytes
GFSDK_VXGI_x64.dll!VXGI::VoxelTexture::AllocateResources() {0x00007ffcc1f0bbe4} + 18 bytes [c:\p4\sw\devrel\libdev\gi\dev\bugfix_main\giworks\src\gi_voxe
GFSDK_VXGI_x64.dll!VXGI::GlobalIllumination::AllocateResources() {0x00007ffcc1edc2da} + 8 bytes [c:\p4\sw\devrel\libdev\gi\dev\bugfix_main\giworks\src\gi_base
GFSDK_VXGI_x64.dll!VXGI::GlobalIllumination::setVoxelizationParameters() {0x00007ffcc1ed92f1} + 11 bytes [c:\p4\sw\devrel\libdev\gi\dev\bugfix_main\giworks\src\gi_base
UE4Editor-D3D11RHI.dll {0x00007ffcc26c5f19} + 0 bytes
UE4Editor-D3D11RHI.dll {0x00007ffcc26d4260} + 0 bytes
UE4Editor-RHI.dll {0x00007ffcddf3f819} + 0 bytes
UE4Editor.exe {0x00007ff71a1dacee} + 0 bytes
UE4Editor.exe {0x00007ff71a1d241b} + 0 bytes
UE4Editor.exe {0x00007ff71a1d2a6a} + 0 bytes
UE4Editor.exe {0x00007ff71a1e45e9} + 0 bytes
UE4Editor.exe {0x00007ff71a1e55d9} + 0 bytes
KERNEL32.DLL {0x00007ffce31913d2} + 0 bytes
ntdll.dll {0x00007ffce4b35454} + 0 bytes
ntdll.dll {0x00007ffce4b35454} + 0 bytes

I'd suggest just getting an official release off the official GitHub page:
http://github.com/epicgames
You need to register your GitHub account with them; you can follow the instructions here:
https://github.com/EpicGames/Signup
When that is all done, you can get an official 4.8 release by downloading the zip at:
https://github.com/epicgames/unrealengine/tree/4.8
You can then follow the instructions to install a from-source version of UE4.
Otherwise, I'm pretty sure you can get 4.8 from the Epic Games Launcher, which most devs are taught to start with.
As for the D:\Unreal Engine 4.8.0 NV-Techs path in the error: that is nothing to worry about in itself. It is the source path on the machine where that third-party build was compiled, baked into the binary at build time, so it will never match your local C:\UE4 folder.
Hopefully this helps :)
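If you do go the from-source route, the Windows build is roughly the following sketch (the usual steps from the repo's README for UE4 releases of that era; adjust to whatever the 4.8 instructions actually say):
git clone -b 4.8 https://github.com/EpicGames/UnrealEngine.git   (or unpack the release zip)
Setup.bat                      (downloads the binary dependencies)
GenerateProjectFiles.bat       (generates UE4.sln)
Then open UE4.sln in Visual Studio, build the UE4 target in the Development Editor / Win64 configuration, and run Engine\Binaries\Win64\UE4Editor.exe.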

Related

Chromium build not running after rebranding

I wanted to build and modify Chromium in order to learn a few things. When I first built and ran it (without any changes), it worked perfectly fine; however, when I tried changing some of the branding in order to tinker with it, it gave me an error message. I referred to two places when rebranding to see where I could change the branding:
First link
Second link
When I ran the modified version, it gave me this error:
[0808/205916.858263:FATAL:bundle_locations.mm(62)] Check failed: new_bundle. Failed to load the bundle at /path/to/src/out/buildTwo/RebrandName.app/Contents/Frameworks/Chromium Framework.framework/Versions/94.0.4601.0
0 libbase.dylib 0x000000010a123638 base::debug::CollectStackTrace(void**, unsigned long) + 12
1 libbase.dylib 0x000000010a00e774 base::debug::StackTrace::StackTrace() + 24
2 libbase.dylib 0x000000010a032c1c logging::LogMessage::~LogMessage() + 184
3 libbase.dylib 0x000000010a033914 logging::LogMessage::~LogMessage() + 12
4 libbase.dylib 0x0000000109ffa0f0 logging::CheckError::~CheckError() + 36
5 libbase.dylib 0x000000010a139d5c base::mac::AssignOverridePath(base::FilePath const&, NSBundle**) + 176
6 libchrome_dll.dylib 0x0000000104ceb5d0 SetUpBundleOverrides() + 40
7 libchrome_dll.dylib 0x0000000104ce8f44 ChromeMain + 156
8 RebrandName 0x00000001049b7cc0 main + 284
9 libdyld.dylib 0x000000018efcd450 start + 4
zsh: trace trap out/buildTwo/RebrandName.app/Contents/MacOS/RebrandName
I tried to investigate what might have happened and saw that /path/to/src/out/buildTwo/RebrandName.app/Contents/Frameworks/Chromium Framework.framework no longer exists; in its place is /path/to/src/out/buildTwo/RebrandName.app/Contents/Frameworks/RebrandName Framework.framework.
I ran out/buildTwo/RebrandName.app/Contents/MacOS/RebrandName and this error popped up.
This is running on an M1 MacBook Pro on macOS Big Sur.
I have tried searching for the fix with no luck. Please understand that I have little to no idea of what I'm doing, so if more clarification is required, please let me know and I will try to edit my question as best as I can.
The fix is in this file:
chrome/common/chrome_constants.cc
Change the PRODUCT_STRING constant there to match your new branding; the framework path the binary looks for at startup ("... Framework.framework") is derived from that constant, which is why your rebranded build still searches for "Chromium Framework.framework".

Problem getting a process' writing/reading bytes and cpu usage in Python 3.8

Recently, for my school finals project, I needed to write a function that returns a process' written/read bytes as well as its CPU usage. I have tried using psutil, tried using WMI (couldn't find a way to retrieve the written/read bytes through that), and even tried combining them. In the end, though, I get an access-denied error from psutil every time on almost all processes. I would love some help solving this problem.
Edit: I am using Windows 10.
This is the code I currently have:
import wmi
import psutil
from elevate import elevate

# Relaunch the script with admin rights; many per-process queries
# on Windows are denied to non-elevated processes.
elevate()

c = wmi.WMI()
# Fire an event every time a new process is created.
process_watcher = c.Win32_Process.watch_for("creation")

while True:
    new_process = process_watcher()
    pid = new_process.ProcessId
    name = new_process.Caption
    try:
        curr_process = psutil.Process(pid=pid)
        try:
            # get the number of CPU cores that can execute this process
            cores = len(curr_process.cpu_affinity())
        except psutil.AccessDenied:
            cores = 0
        # get the CPU usage percentage
        cpu_usage = curr_process.cpu_percent()
        try:
            # get the memory usage in bytes (USS needs elevated rights)
            memory_usage = curr_process.memory_full_info().uss
        except psutil.AccessDenied:
            memory_usage = 0
        # total process read and written bytes
        io_counters = curr_process.io_counters()
        read_bytes = io_counters.read_bytes
        write_bytes = io_counters.write_bytes
        # Use an f-string here: pid, write_bytes and cpu_usage are numbers,
        # so the original str + int concatenation raised a TypeError.
        print(f"{pid}: {name}, Write bytes: {write_bytes}, CPU usage: {cpu_usage}")
    except psutil.NoSuchProcess:
        # Short-lived processes can exit before we get to query them.
        print(f"{pid}: process already exited")
    except psutil.AccessDenied:
        print("Access Denied")

ZFS: Unable to expand pool after increasing disk size in vmware

I have a CentOS 7 VM with ZFS on Linux installed.
The VM has a disk /dev/sdb that I've added to a pool named 'backup', and in this pool I created a dataset.
Now I wanted to increase the size of the disk in VMware and then expand the size of the pool, but I'm not getting this to work.
I've tried 'zpool online -e backup sdb', but nothing changes.
I've tried running 'partprobe /dev/sdb' before and after the line above, but nothing changes.
I've tried rebooting plus the above; nothing changes.
I've tried 'parted /dev/sdb', resizing the partition (it suggests the actual new size of the volume), and then all of the above. But nothing changes.
I've tried 'zpool export backup' + 'zpool import backup' in various combinations with all of the above. No luck.
And also: 'lsblk' and 'df -h' report the old/wrong size of /dev/sdb, even though parted seems to understand that it has been increased.
PS: autoexpand=on
What to do?
I faced a similar issue today and had to try a lot before finding the solution.
When I tried the known solutions (setting autoexpand=on with zpool and re-running partprobe), the pool would not auto-expand (even after a restart).
Finally, I solved it using parted, without getting into zpool at all.
We need to be careful here, since selecting the wrong partition can cause data loss.
What worked for me in your situation:
Step 1: Find which partition you are trying to expand. In my case, it is number 5, as seen below (the unallocated space is after this partition). Use parted -l
parted -l
Output
Model: VMware, VMware Virtual S (scsi)
Disk /dev/sda: 69.8GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:
Number Start End Size File system Name Flags
1 1049kB 2097kB 1049kB bios_grub
2 2097kB 540MB 538MB fat32 EFI System Partition boot, esp
3 540MB 2009MB 1469MB swap
4 2009MB 3592MB 1583MB zfs
5 3592MB 32.2GB 28.6GB zfs
Step 2: Explicitly instruct parted to expand partition number 5 to 100% of the available space. Note that '5' is not static; you need to use the number of the partition you wish to expand. Double-check this. Use parted /dev/XXX resizepart YY 100%
parted /dev/sda resizepart 5 100%
After this, I was able to use the entire space in the VM.
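Putting it together with the commands from the question, the whole sequence for the asker's /dev/sdb would look roughly like this (a sketch; <N> stands for the zfs data partition number shown by parted -l, so double-check it first):
parted /dev/sdb resizepart <N> 100%
partprobe /dev/sdb
zpool online -e backup sdb
With autoexpand=on, the pool should pick up the extra space once the underlying partition has actually grown.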
For reference:
lsblk before:
sda 8:0 0 65G 0 disk
├─sda1 8:1 0 1M 0 part
├─sda2 8:2 0 513M 0 part /boot/grub
│ /boot/efi
├─sda3 8:3 0 1.4G 0 part
│ └─cryptoswap 253:1 0 1.4G 0 crypt [SWAP]
├─sda4 8:4 0 1.5G 0 part
└─sda5 8:5 0 29.5G 0 part
lsblk after:
sda 8:0 0 65G 0 disk
├─sda1 8:1 0 1M 0 part
├─sda2 8:2 0 513M 0 part /boot/grub
│ /boot/efi
├─sda3 8:3 0 1.4G 0 part
│ └─cryptoswap 253:1 0 1.4G 0 crypt [SWAP]
├─sda4 8:4 0 1.5G 0 part
└─sda5 8:5 0 61.7G 0 part

valgrind - total heap usage: 0 allocs, 0 frees, 0 bytes allocated

When I run valgrind on my binary, it always shows the output below, even though I have allocated memory using malloc.
==13775== HEAP SUMMARY:
==13775== in use at exit: 0 bytes in 0 blocks
==13775== total heap usage: 0 allocs, 0 frees, 0 bytes allocated
==13775==
==13775== All heap blocks were freed -- no leaks are possible
Please let me know the solution if someone has faced this problem before.
Usually, valgrind not seeing any malloc/free calls is due to one of the following reasons:
1. the program is linked statically
2. the program is linked dynamically, but the malloc/free library is static
3. the malloc/free library is dynamic, but it is a 'non-standard' library (for example tcmalloc)
As ldd shows that you have some dynamic libraries, it is not reason 1.
So it is probably reason 2 or reason 3.
For both 2 and 3, you can make it work by using the option
--soname-synonyms=somalloc=....
See the user manual for more details.
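For example, the valgrind manual's own example for a program that uses tcmalloc is:
valgrind --soname-synonyms=somalloc=*tcmalloc* ./your_program
The glob matches the soname of the tcmalloc shared object (e.g. libtcmalloc.so.4), which tells valgrind to intercept its allocation functions as if they were the standard malloc/free.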

Size of ELF file vs size in RAM

I have an STM32 onto which I load ELF files in RAM (using OpenOCD and JTAG). So far, I haven't really been paying attention to the size of the ELF files that I load.
Normally, when I compile an ELF file that is too large for my board (my board has 128KB of RAM onto which the executable can be loaded), the linker complains (in the linker script I specify the size of the RAM).
Now that I look at the size of the output ELF file, I see that it is 261KB, and yet the linker has not complained!
Why is my ELF file so large while the linker is fine with it? Is the ELF file on the host loaded onto the board exactly as-is?
No -- ELF contains things like relocation records that don't get loaded. It can also contain debug information (typically in DWARF format) that only gets loaded by a debugger.
You might want to use readelf to get an idea of what one of your ELF files actually contains. You probably don't want to do it all the time, but doing it at least a few times will give you a much better idea of what you're dealing with.
readelf is part of the binutils package; chances are pretty decent you already have a copy that came with your other development tools.
If you want to get into even more detail, Googling for something like "ELF Format" should turn up lots of articles. Be aware, however, that ELF is a decidedly non-trivial format. If you decide you want to understand all the details, it'll take quite a bit of time and effort.
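For instance (standard readelf flags from binutils; the file name is a placeholder):
readelf -S your-elf-file.elf   (section headers: sizes of .text, .data, .debug_*, ...)
readelf -l your-elf-file.elf   (program headers: only the PT_LOAD segments are loaded onto the target)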
Using the utility arm-none-eabi-size, you can get a better picture of what actually gets used on the chip. The -A option will break down the size by section.
The relevant sections to look at when it comes to RAM are .data and .bss (static RAM usage) and .heap (the heap: dynamic memory allocated by your program).
Roughly speaking, as long as the static RAM size is below the RAM number from the datasheet, you should be able to run something on the chip and the linker shouldn't complain; your heap usage will then depend on your program.
Note: .text is what needs to fit in the flash (the code).
example:
arm-none-eabi-size -A your-elf-file.elf
Sample output:
section size addr
.mstack 2048 536870912
.pstack 2304 536872960
.nocache 32 805322752
.eth 0 805322784
.vectors 672 134217728
.xtors 68 134610944
.text 162416 134611072
.rodata 23140 134773488
.ARM.exidx 8 134796628
.data 8380 603979776
.bss 101780 603988160
.ram0_init 0 604089940
.ram0 0 604089940
.ram1_init 0 805306368
.ram1 0 805306368
.ram2_init 0 805322784
.ram2 0 805322784
.ram3_init 0 805339136
.ram3 0 805339136
.ram4_init 0 939524096
.ram4 0 939524096
.ram5_init 0 536875264
.ram5 0 536875264
.ram6_init 0 0
.ram6 0 0
.ram7_init 0 947912704
.ram7 0 947912704
.heap 319916 604089940
.ARM.attributes 51 0
.comment 77 0
.debug_line 407954 0
.debug_info 3121944 0
.debug_abbrev 160701 0
.debug_aranges 14272 0
.debug_str 928595 0
.debug_loc 493671 0
.debug_ranges 146776 0
.debug_frame 51896 0
Total 5946701
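To make that concrete with this sample output (my arithmetic, not from the tool): static RAM is .data + .bss = 8380 + 101780 = 110160 bytes, roughly 108 KiB, plus the stack regions (.mstack 2048, .pstack 2304) and the .heap region reserved for dynamic allocation; flash is dominated by .text + .rodata = 162416 + 23140 = 185556 bytes, about 181 KiB. The multi-megabyte .debug_* sections at the bottom exist only in the ELF file on the host, which is why the file can be far larger than the target's memory.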