I am trying to build images for dev and runtime Docker containers based on a custom Unreal Engine (right now the engine is a full copy of the original UE 5.1).
I am following the official documentation on this page:
https://docs.unrealengine.com/4.27/en-US/SharingAndReleasing/Containers/HowTo/BuildingTheLinuxContainerImagesFromSource/
Steps to reproduce:
1) Clone the Unreal Engine repo (git clone -b release --single-branch git@github.com:EpicGames/UnrealEngine.git)
2) Run Setup.sh
3) Run GenerateProjectFiles.sh
4) Run ./build branchname repourl (the full sequence is consolidated below)
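Consolidated, that sequence is (the directory change is assumed; the build script location follows the linked documentation):

git clone -b release --single-branch git@github.com:EpicGames/UnrealEngine.git
cd UnrealEngine
./Setup.sh
./GenerateProjectFiles.sh
./build branchname repourl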
Expected result:
Images are built successfully.
Actual result:
The engine build fails with an unknown error (full log with detailed information here:
https://drive.google.com/drive/folders/1kkGCqApdnQBLGYLy7yeHydwb-rTByEX3?usp=share_link)
Extra information:
I get the same error when I try to build 5.1 from the Epic Games repo, but building the 5.0 version works: after 4 hours of waiting the process completed and all images were created (that log can also be found at the link:
https://drive.google.com/drive/folders/1kkGCqApdnQBLGYLy7yeHydwb-rTByEX3?usp=share_link)
Part of the log with the warning, after which the engine build process stops:
#22 14630.5 LogCore: Warning: dlopen failed: libxkbcommon.so.0: cannot open shared object file: No such file or directory
#22 14630.5 LogShaderCompilers: Display: ================================================
#22 14630.5 LogShaderCompilers: Display: === FShaderJobCache stats ===
#22 14630.5 LogShaderCompilers: Display: Total job queries 0, among them cache hits 0 (0.00%)
#22 14630.5 LogShaderCompilers: Display: Tracking 0 distinct input hashes that result in 0 distinct outputs (0.00%)
#22 14630.5 LogShaderCompilers: Display: RAM used: 0.00 MB (0.00 GB) of 1587.20 MB (1.55 GB) budget. Usage: 0.00%
#22 14630.5 LogShaderCompilers: Display: === Shader Compilation stats ===
#22 14630.5 LogShaderCompilers: Display: Shaders Compiled: 0
#22 14630.5 LogShaderCompilers: Display: Jobs assigned 0, completed 0 (0.00%)
#22 14630.6 LogShaderCompilers: Display: Time at least one job was in flight (either pending or executed): 0.00 s
#22 14630.6 LogShaderCompilers: Display: ================================================
#22 14630.6 LogShaderCompilers: Display: Shaders left to compile 0
#22 14630.6 LogDerivedDataCache: Display:
../../../Engine/DerivedDataCache/Compressed.ddp.14B7DC4256834265883
9EAC5FDCACDCA: Opened pak cache for reading. (0 MiB)
#22 14630.6 LogDerivedDataCache: Display: ../../../Engine/DerivedDataCache/Compressed.ddp: Opened pak cache for writing.
#22 14630.6 LogDerivedDataCache: Display: Sucessfully wrote ../../../Engine/DerivedDataCache/Compressed.ddp.
#22 14630.6 libc++abi: terminating with uncaught exception of type std::runtime_error: COULDN'T load dxcompiler.
#22 14630.6 Signal 6 caught
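One detail I have not ruled out: the dlopen warning above suggests libxkbcommon may simply be missing from the build image. A hypothetical Dockerfile fragment to eliminate that variable (assuming an Ubuntu/Debian-based image; the package name may differ on other distros):

# install the shared library the dlopen warning complains about
RUN apt-get update && apt-get install -y libxkbcommon0 && rm -rf /var/lib/apt/lists/*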
My source video file (a 1 h 30 min movie) plays fine in both PotPlayer and VLC: H.264, 8-bit color, 7755 kb/s bitrate.
The NVEnc command I'm using is this:
.\nvencc\NVEncC64.exe --avhw -i "input.mkv" --codec hevc --preset quality --bframes 4 --ref 7 --cu-max 32 --cu-min 8 --output-depth 10 --audio-copy --sub-copy -o "output.mkv"
Encoding works fine (I believe):
NVEncC (x64) 5.26 (r1786) by rigaya, Jan 31 2021 09:23:04 (VC 1928/Win/avx2)
OS Version Windows 10 x64 (19042)
CPU AMD Ryzen 5 1600 Six-Core Processor [3.79GHz] (6C/12T)
GPU #0: GeForce GTX 1660 (1408 cores, 1830 MHz)[PCIe3x16][457.51]
NVENC / CUDA NVENC API 11.0, CUDA 11.1, schedule mode: auto
Input Buffers CUDA, 21 frames
Input Info avcuvid: H.264/AVC, 1920x800, 24000/1001 fps
AVSync vfr
Vpp Filters cspconv(nv12 -> p010)
Output Info H.265/HEVC main10 @ Level auto
1920x800p 1:1 23.976fps (24000/1001fps)
avwriter: hevc, eac3, subtitle#1 => matroska
Encoder Preset quality
Rate Control CQP I:20 P:23 B:25
Lookahead off
GOP length 240 frames
B frames 4 frames [ref mode: disabled]
Ref frames 7 frames, MultiRef L0:auto L1:auto
AQ off
CU max / min 32 / 8
Others mv:auto
encoded 142592 frames, 219.97 fps, 1549.90 kbps, 1098.83 MB
encode time 0:10:48, CPU: 8.7%, GPU: 5.2%, VE: 98.3%, VD: 21.5%, GPUClock: 1966MHz, VEClock: 1816MHz
frame type IDR 595
frame type I 595, avgQP 20.00, total size 39.44 MB
frame type P 28519, avgQP 23.00, total size 471.93 MB
frame type B 113478, avgQP 25.00, total size 587.45 MB
but when I try to play it in either PotPlayer or VLC, it either says there is no video track or it just doesn't play at all.
MediaInfo doesn't show any video, audio, or subtitle tracks either, just the file name and the file size. Am I missing something?
Switching --avhw to --avsw solved the problem.
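For reference, the working command just swaps the hardware decoder for software decoding; everything else is unchanged from the command above:

.\nvencc\NVEncC64.exe --avsw -i "input.mkv" --codec hevc --preset quality --bframes 4 --ref 7 --cu-max 32 --cu-min 8 --output-depth 10 --audio-copy --sub-copy -o "output.mkv"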
I am running IP Webcam on Android, which provides an mpjpeg video stream. I have to limit the capture frame rate to 5 fps to save battery.
However, ffmpeg still detects the input stream as 25 fps, which causes the recording to be saved at the wrong speed and the timestamps and audio to be desynchronized.
Input #0, mpjpeg, from 'https://***:***@smarthome:8080/video':
Duration: N/A, bitrate: N/A
Stream #0:0: Video: mjpeg, yuvj420p(pc, bt470bg/unknown/unknown), 1280x720 [SAR 1:1 DAR 16:9], 25 tbr, 25 tbn, 25 tbc
Input #1, ogg, from 'https://***:***@smarthome:8080/audio.opus':
Duration: N/A, start: 0.006500, bitrate: N/A
Stream #1:0: Audio: opus, 48000 Hz, mono, fltp
Metadata:
ENCODER : Lavf58.12.100
[stream_segment,ssegment @ 0x19b49a0] Opening '/mnt/nas/SecurityCamera/2020-06-20_14-26-04.mkv' for writing
Output #0, stream_segment,ssegment, to '/mnt/nas/SecurityCamera/%Y-%m-%d_%H-%M-%S.mkv':
Metadata:
encoder : Lavf58.20.100
Stream #0:0: Video: mjpeg, yuvj420p(pc, bt470bg/unknown/unknown), 1280x720 [SAR 1:1 DAR 16:9], q=2-31, 25 tbr, 1k tbn, 25 tbc
Stream #0:1: Audio: opus, 48000 Hz, mono, fltp
Metadata:
ENCODER : Lavf58.12.100
Stream mapping:
Stream #0:0 -> #0:0 (copy)
Stream #1:0 -> #0:1 (copy)
Press [q] to stop, [?] for help
[ogg @ 0x1753e00] Thread message queue blocking; consider raising the thread_queue_size option (current value: 8)
frame= 8 fps=7.9 q=-1.0 size=N/A time=00:00:00.49 bitrate=N/A speed=0.489x
As you can see, it detects Input #0 as 25 tbr, 25 tbn, 25 tbc, which leads ffmpeg to expect 25 fps; however, the actual fps (shown high right now, but slowly approaching 5 fps) is way below 25, which causes the speed to be <1x.
I have tried -r 5 before -i ..., -vsync 2, and different values for -enc_time_base, none of which had any impact. From https://trac.ffmpeg.org/ticket/403 I've learned that -r only works on inputs with unknown fps. But my input doesn't have unknown fps; it has the wrong fps.
Is there any way to force overwrite the input fps so that I can get a proper speed of 1x and synchronized timestamps and audio?
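For concreteness, this is roughly the command producing the log above (reconstructed from the log output; the exact segment options are assumptions):

ffmpeg -i 'https://***:***@smarthome:8080/video' \
       -i 'https://***:***@smarthome:8080/audio.opus' \
       -map 0:v -map 1:a -c copy \
       -f stream_segment -segment_format matroska -strftime 1 \
       '/mnt/nas/SecurityCamera/%Y-%m-%d_%H-%M-%S.mkv'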
I am using an stm32mp157c-dk2 board, and I added the RNDIS gadget to the config file. When I run U-Boot on the board I get "No ethernet found". This is the boot log:
U-Boot SPL 2018.11-stm32mp-r2.1-00026-g161ca183f1-dirty (Jan 31 2020 - 12:34:38 +0200)
Model: STMicroelectronics STM32MP157C-DK2 Discovery Board
RAM: DDR3-1066/888 bin G 1x4Gb 533MHz v1.41
Trying to boot from MMC1
U-Boot 2018.11-stm32mp-r2.1-00026-g161ca183f1-dirty (Jan 31 2020 - 12:34:38 +0200)
CPU: STM32MP157CAC Rev.B
Model: STMicroelectronics STM32MP157C-DK2 Discovery Board
Board: stm32mp1 in basic mode (st,stm32mp157c-dk2)
Board: MB1272 Var2 Rev.C-01
DRAM: 512 MiB
Clocks:
- MPU : 650 MHz
- MCU : 208.878 MHz
- AXI : 266.500 MHz
- PER : 24 MHz
- DDR : 533 MHz
*******************************************
* WARNING 500mA power supply detected *
* Current too low, use a 3A power supply! *
*******************************************
NAND: 0 MiB
MMC: STM32 SDMMC2: 0, STM32 SDMMC2: 1
Loading Environment from EXT4... OK
In: serial
Out: serial
Err: serial
Net: No ethernet found.
Hit any key to stop autoboot: 0
Boot over mmc0!
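For reference, the RNDIS gadget entries I added look roughly like this (a sketch; the exact Kconfig symbols are an assumption and depend on the U-Boot version):

CONFIG_USB_GADGET=y
CONFIG_USB_ETHER=y
CONFIG_USB_ETH_RNDIS=y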
Do you have any suggestions? Thanks in advance!
That big warning about too small a power supply is the first thing to look at. A lack of power tends to mean that not all blocks of the SoC can be brought up / made available.
I re-set up my BackupPC and am running into a problem:
I want to back up "/backup" on all hosts. I started with ONE host for test purposes.
Process (the hook configuration is sketched after this list):
BackupPC calls a shell script on the client.
That script creates some snapshots and mounts them under /backup/...
Now BackupPC should run the backup.
Finally, BackupPC calls another shell script, which unmounts and removes the snapshots.
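A minimal sketch of how the hooks are wired in config.pl (the script path is hypothetical; $sshPath and $host are BackupPC's standard substitution variables):

$Conf{DumpPreUserCmd}  = '$sshPath -q -x -l root $host /usr/local/bin/snapshot.sh create';
$Conf{DumpPostUserCmd} = '$sshPath -q -x -l root $host /usr/local/bin/snapshot.sh remove';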
/backup gets "backed up", but only the folders, not their contents.
I extended the first shell script to make sure the folders have content; here is the output:
2017-06-04 20:11:14 Created directory /data/backuppc/pc/v3.lipperts-web.de/refCnt
2017-06-04 20:11:15 Output from DumpPreUserCmd: Reducing COW size 5,00 GiB down to maximum usable size 256,00 MiB.
2017-06-04 20:11:15 Output from DumpPreUserCmd: Logical volume "snaptshot-zabbix" created
2017-06-04 20:11:15 Output from DumpPreUserCmd: Reducing COW size 5,00 GiB down to maximum usable size 256,00 MiB.
2017-06-04 20:11:15 Output from DumpPreUserCmd: Logical volume "snaptshot-filebeat" created
2017-06-04 20:11:15 Output from DumpPreUserCmd: Reducing COW size 5,00 GiB down to maximum usable size 1,01 GiB.
2017-06-04 20:11:15 Output from DumpPreUserCmd: Logical volume "snaptshot-teamspeak" created
2017-06-04 20:11:16 Output from DumpPreUserCmd: Logical volume "snaptshot-schnoddi" created
2017-06-04 20:11:16 Output from DumpPreUserCmd: Logical volume "snaptshot-sentry" created
2017-06-04 20:11:16 Output from DumpPreUserCmd: Reducing COW size 5,00 GiB down to maximum usable size 256,00 MiB.
2017-06-04 20:11:16 Output from DumpPreUserCmd: Logical volume "snaptshot-nginx" created
2017-06-04 20:11:16 Output from DumpPreUserCmd: insgesamt 13
2017-06-04 20:11:16 Output from DumpPreUserCmd: -rw-r--r-- 1 root root 329 Jun 3 17:43 docker-compose.yml
2017-06-04 20:11:16 Output from DumpPreUserCmd: drwx------ 2 root root 12288 Jun 3 17:43 lost+found
2017-06-04 20:11:16 full backup started for directory /backup
2017-06-04 20:11:17 Output from DumpPostUserCmd: Logical volume "snaptshot-zabbix" successfully removed
2017-06-04 20:11:17 Output from DumpPostUserCmd: Logical volume "snaptshot-filebeat" successfully removed
2017-06-04 20:11:17 Output from DumpPostUserCmd: Logical volume "snaptshot-teamspeak" successfully removed
2017-06-04 20:11:17 Output from DumpPostUserCmd: Logical volume "snaptshot-schnoddi" successfully removed
2017-06-04 20:11:17 Output from DumpPostUserCmd: Logical volume "snaptshot-sentry" successfully removed
2017-06-04 20:11:18 Output from DumpPostUserCmd: Logical volume "snaptshot-nginx" successfully removed
2017-06-04 20:11:18 Got fatal error during xfer (No files dumped for share /backup)
2017-06-04 20:11:23 Backup aborted (No files dumped for share /backup)
You can see there is a file listed, "docker-compose.yml", but the backup is empty:
https://i.imgur.com/u6hfIh3.png
What could be the problem here?
Changing RsyncArgs from the defaults to the args I had (by default) in BackupPC 3 made it work:
http://imgur.com/a/rYcHL
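From memory, the BackupPC 3 defaults I copied back look roughly like this in config.pl (this list is an assumption; verify against your own v3 config.pl):

$Conf{RsyncArgs} = [
    '--numeric-ids',
    '--perms',
    '--owner',
    '--group',
    '-D',
    '--links',
    '--hard-links',
    '--times',
    '--block-size=2048',
    '--recursive',
];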
I am trying to allocate RAM with Xms = Xmx on a SLES10 x64 running under VMware.
When stopping the JVM the following error is thrown:
Java HotSpot(TM) 64-Bit Server VM warning: Failed to reserve shared memory (errno = 12).
The VM has 8 GB of RAM, and it is reserved.
The VM sees the 8 GB, and it can be allocated at runtime via the Xmx setting.
On another virtual SLES10 with 16 GB RAM reserved via VMware I have no problem allocating RAM; even when setting the hugepages and shmmax only via echo, it works fine:
echo 8000 > /proc/sys/vm/nr_hugepages
echo 8589934592 > /proc/sys/kernel/shmmax
Using the same echo commands on the other SLES10 (the one where allocation fails) shows no effect in /proc/meminfo at all.
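To verify whether the settings took effect, these standard procfs reads can be used:

grep -i huge /proc/meminfo
cat /proc/sys/kernel/shmmax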
Here are my configs; the first one is from the SLES10 where Xms fails to allocate.
# more /apps/liferay-portal-5.2.5/tomcat-5.5.27/bin/setenv.sh
JAVA_HOME=/apps/java5
JRE_HOME=/apps/java5
JAVA_OPTS="$JAVA_OPTS -Xms3G -Xmx3G -XX:NewRatio=3 -XX:MaxPermSize=256m -XX:SurvivorRatio=20 -Dsun.rmi.dgc.client.gcInterval=1800000 -Dsun.rmi.dgc.server.gcInterval=1800000 -XX:+UsePa
rallelGC -XX:ParallelGCThreads=4 -XX:+UseLargePages -Xloggc:/apps/gc.log -XX:+PrintGCApplicationStoppedTime -XX:+PrintGCApplicationConcurrentTime -XX:+PrintGC -XX:+PrintGCTimeStamps -
XX:+PrintGCDetails -Dfile.encoding=UTF8 -Duser.timezone=GMT+2 -Djava.security.auth.login.config=$CATALINA_HOME/conf/jaas.config -Dorg.apache.catalina.loader.WebappClassLoader.ENABLE_C
LEAR_REFERENCES=false"
more /etc/sysctl.conf
kernel.shmmax=7516192768
vm.nr_hugepages=3072
vm.hugetlb_shm_group=1000
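These sysctl.conf values are applied at boot; the standard way to load them immediately without rebooting is:

sysctl -p

(Note: allocating nr_hugepages after boot can fail if memory is already fragmented.)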
more /etc/security/limits.conf
#
#
#* soft core 0
#* hard rss 10000
##student hard nproc 20
##faculty soft nproc 20
##faculty hard nproc 50
#ftp hard nproc 0
##student - maxlogins 4
* soft memlock unlimited
* hard memlock unlimited
tomcat soft memlock 6291456
tomcat hard memlock 6291456
# End of file
# cat /proc/meminfo
MemTotal: 7928752 kB
MemFree: 737004 kB
Buffers: 0 kB
Cached: 417368 kB
SwapCached: 0 kB
Active: 487428 kB
Inactive: 324072 kB
HighTotal: 0 kB
HighFree: 0 kB
LowTotal: 7928752 kB
LowFree: 737004 kB
SwapTotal: 2097144 kB
SwapFree: 2097020 kB
Dirty: 0 kB
Writeback: 0 kB
AnonPages: 397208 kB
Mapped: 72180 kB
Slab: 62136 kB
CommitLimit: 2915792 kB
Committed_AS: 748576 kB
PageTables: 3292 kB
VmallocTotal: 34359738367 kB
VmallocUsed: 7028 kB
VmallocChunk: 34359731271 kB
HugePages_Total: 3072
HugePages_Free: 2305
HugePages_Rsvd: 897
Hugepagesize: 2048 kB
# ipcs -l
------ Shared Memory Limits --------
max number of segments = 4096
max seg size (kbytes) = 7340032
max total shared memory (kbytes) = 4611686018427386880
min seg size (bytes) = 1
------ Semaphore Limits --------
max number of arrays = 1024
max semaphores per array = 250
max semaphores system wide = 256000
max ops per semop call = 32
semaphore max value = 32767
------ Messages: Limits --------
max queues system wide = 16
max size of message (bytes) = 65536
default max size of queue (bytes) = 65536
# ulimit -a
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
file size (blocks, -f) unlimited
pending signals (-i) 65536
max locked memory (kbytes, -l) unlimited
max memory size (kbytes, -m) unlimited
open files (-n) 1024
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 65536
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
On the second VM it looks like this:
cat /proc/meminfo
MemTotal: 16190448 kB
MemFree: 176812 kB
Buffers: 52752 kB
Cached: 755256 kB
SwapCached: 0 kB
Active: 713808 kB
Inactive: 425300 kB
HighTotal: 0 kB
HighFree: 0 kB
LowTotal: 16190448 kB
LowFree: 176812 kB
SwapTotal: 35658896 kB
SwapFree: 35658796 kB
Dirty: 932 kB
Writeback: 0 kB
AnonPages: 333620 kB
Mapped: 79120 kB
Slab: 37492 kB
CommitLimit: 36356744 kB
Committed_AS: 646284 kB
PageTables: 3584 kB
VmallocTotal: 34359738367 kB
VmallocUsed: 23500 kB
VmallocChunk: 34359713907 kB
HugePages_Total: 7224
HugePages_Free: 6654
HugePages_Rsvd: 582
Hugepagesize: 2048 kB
JAVA_OPTS="$JAVA_OPTS -Xms2G -Xmx2G -XX:NewRatio=3 -XX:MaxPermSize=256m -XX:SurvivorRatio=20 -Dsun.rmi.dgc.client.gcInterval=1800000 -Dsun.rmi.dgc.server.gcI
nterval=1800000 -XX:+UseParallelGC -XX:ParallelGCThreads=2 -XX:+UseLargePages -Xloggc:/apps/gc.log -XX:+PrintGCApplicationStoppedTime -XX:+PrintGCApplication
ConcurrentTime -XX:+PrintGC -XX:+PrintGCTimeStamps -XX:+PrintGCDetails -Dfile.encoding=UTF8 -Duser.timezone=GMT+2 -Djava.security.auth.login.config=$CATALINA
_HOME/conf/jaas.config -Dorg.apache.catalina.loader.WebappClassLoader.ENABLE_CLEAR_REFERENCES=false"
hepide01pep1:~ # ipcs -l
------ Shared Memory Limits --------
max number of segments = 4096
max seg size (kbytes) = 8388608
max total shared memory (kbytes) = 4611686018427386880
min seg size (bytes) = 1
------ Semaphore Limits --------
max number of arrays = 1024
max semaphores per array = 250
max semaphores system wide = 256000
max ops per semop call = 32
semaphore max value = 32767
------ Messages: Limits --------
max queues system wide = 16
max size of message (bytes) = 65536
default max size of queue (bytes) = 65536
Did you try with a smaller heap, maybe 2 GiB? You can do a simple test with:
java -Xmx3G -version
Let us know how it goes and what it spits out.
I stumbled on this issue (errno 12) on CentOS 5.9 as well, using 16G heaps.
After verifying that the hard / soft memory locks were unlimited in /etc/security/limits.conf and still getting the error, I started running java -version as suggested by Anil, with all of my JAVA_OPTS intact.
I found that removing the "-XX:+UseLargePages" option gets rid of that error.
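A minimal sketch of that test (your JAVA_OPTS will differ; the bash substitution simply drops the one flag):

# reproduce with the full option set
java $JAVA_OPTS -version
# same options minus large pages; this ran clean for me
java ${JAVA_OPTS/-XX:+UseLargePages/} -version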
I hope this helps you!