Failed to run bcc-tools profiler - bcc-bpf

I am attempting to run bcc-tools' profile.py and ran into the following error:
./bcc/tools/profile.py
Sampling at 49 Hertz of all threads by user + kernel stack... Hit Ctrl-C to end.
In file included from <built-in>:1:
././include/linux/kconfig.h:5:10: fatal error: 'generated/autoconf.h' file not found
#include <generated/autoconf.h>
^~~~~~~~~~~~~~~~~~~~~~
1 error generated.
Traceback (most recent call last):
File "./bcc/tools/profile.py", line 265, in <module>
b = BPF(text=bpf_text)
File "/usr/lib/python2.7/dist-packages/bcc/__init__.py", line 325, in __init__
raise Exception("Failed to compile BPF text")
The Linux headers are present under the normal location, /lib/modules. This is running under kernel version 4.19.88.
Any pointers will be appreciated.
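For reference, the compile failure means clang (invoked by bcc) could not find the configured kernel headers: generated/autoconf.h lives in the kernel build tree, not in the bare source. A quick check, assuming the usual /lib/modules layout and a bcc version that honours the BCC_KERNEL_SOURCE override (the source path below is a placeholder):
# confirm the running kernel's build tree contains the generated headers
ls /lib/modules/$(uname -r)/build/include/generated/autoconf.h
# if the configured headers live somewhere else, point bcc at that tree explicitly
export BCC_KERNEL_SOURCE=/path/to/configured/kernel/source
sudo ./bcc/tools/profile.py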

Related

ninja WebRTC build error: ninja: build stopped: subcommand failed

WSL + Ubuntu 18.04 + WebRTC
This project builds properly on Ubuntu 18.04, but not under WSL.
gn command:
gn gen out/SDK --args='target_os="android" target_cpu="arm" is_debug=true rtc_use_h264=true use_openh264=true rtc_libvpx_build_vp9=false rtc_build_libvpx=true rtc_include_tests=false rtc_include_ilbc=false rtc_include_pulse_audio=false ffmpeg_branding="Chrome"'
Error info:
ninja: Entering directory `out/SDK'
[2138/5820] ACTION
//api/video:video_frame_enums(//build/toolchain/android:android_clang_arm)
FAILED: gen/api/video/video_frame_enums.srcjar
python ../../build/android/gyp/java_cpp_enum.py --depfile gen/api/video/video_frame_enums.d --srcjar=gen/api/video/video_frame_enums.srcjar ../../api/video/video_codec_type.h
Traceback (most recent call last):
File "../../build/android/gyp/java_cpp_enum.py", line 437, in <module>
DoMain(sys.argv[1:])
File "../../build/android/gyp/java_cpp_enum.py", line 429, in DoMain
for output_path, data in DoGenerate(input_paths):
File "../../build/android/gyp/java_cpp_enum.py", line 329, in DoGenerate
source_path)
Exception: No enums found in ../../api/video/video_codec_type.h
Did you forget prefixing enums with "// GENERATED_JAVA_ENUM_PACKAGE: foo"?
[2140/5820] ACTION //api:rtp_parameters_enums(//build/toolchain/android:android_clang_arm)
FAILED: gen/api/rtp_parameters_enums.srcjar
python ../../build/android/gyp/java_cpp_enum.py --depfile gen/api/rtp_parameters_enums.d --srcjar=gen/api/rtp_parameters_enums.srcjar ../../api/rtp_parameters.h
Traceback (most recent call last):
File "../../build/android/gyp/java_cpp_enum.py", line 437, in <module>
DoMain(sys.argv[1:])
File "../../build/android/gyp/java_cpp_enum.py", line 429, in DoMain
for output_path, data in DoGenerate(input_paths):
File "../../build/android/gyp/java_cpp_enum.py", line 329, in DoGenerate
source_path)
Exception: No enums found in ../../api/rtp_parameters.h
Did you forget prefixing enums with "// GENERATED_JAVA_ENUM_PACKAGE: foo"?
[2143/5820] CC obj/third_party/libaom/libaom_intrinsics_neon/av1_inv_txfm_neon.o
ninja: build stopped: subcommand failed.
How can I get this to compile properly?
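For context, java_cpp_enum.py only generates Java when the C++ header carries the marker comment quoted in the error (// GENERATED_JAVA_ENUM_PACKAGE: <java package>), so a first sanity check is that the marker actually survived the checkout; under WSL, line-ending or checkout quirks are a plausible (but unconfirmed) reason the parser misses it. A sketch, run from the WebRTC source root:
# the failing headers should each contain the marker comment above their enums
grep -n "GENERATED_JAVA_ENUM_PACKAGE" api/video/video_codec_type.h api/rtp_parameters.h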

Why does a RuntimeError happen after importing MeCab?

What is the problem?
I am using Python 3 on Windows 10; the environment is Anaconda.
m=MeCab.Tagger("-Ochasen")
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Users\a.sakata\Anaconda3\lib\site-packages\MeCab.py", line 253, in __init__
_MeCab.Tagger_swiginit(self, _MeCab.new_Tagger(*args))
RuntimeError
Your dicrc probably doesn't include the chasen format. This causes the MeCab C library to die with an error, which results in the RuntimeError in Python.
I get the same error, and if I run mecab on the command line I get this output:
$ mecab -Ochasen
writer.cpp(63) [!tmp.empty()] unkown format type [chasen]
If you don't get an error on the command line the cause might be something else.
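To see which dictionary (and therefore which dicrc) MeCab is loading, and to work around a dicrc that lacks a chasen section, something like this should help (a sketch; -D/--dictionary-info is available in common MeCab builds, and ipadic is just an example of a dictionary that ships chasen definitions):
# show the system dictionary in use; its directory contains the dicrc being read
mecab -D
# point mecab at a dictionary whose dicrc defines the chasen output format
mecab -d /path/to/ipadic -Ochasen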

JHBuild runtime error "Failed to close %s stream" (macOS)

I started a JHBuild with the wrong arguments (forgot 'build') and hit control-C at what appears to have been the wrong moment.
Now when I try any JHBuild command, e.g. jhbuild bootstrap, I get:
Traceback (most recent call last):
File "/Users/gnucashdev/Source/jhbuild/jhbuild/config.py", line 197, in load
execfile(filename, config)
File "/Users/gnucashdev/.jhbuildrc", line 408, in <module>
execfile(_userrc)
File "/Users/gnucashdev/.jhbuildrc-custom", line 22, in <module>
setup_sdk()
File "/Users/gnucashdev/.jhbuildrc", line 260, in setup_sdk
gcc = _popen("xcrun -f gcc")
File "/Users/gnucashdev/.jhbuildrc", line 41, in _popen
raise RuntimeError, "Failed to close %s stream" % cmd_arg
RuntimeError: Failed to close xcrun -f gcc stream
jhbuild: could not load config file
I've tried re-installing jhbuild with
./gtk-osx-build-setup.sh
but the next step - i.e.
jhbuild bootstrap
yields the above error. Some file appears to have been compromised, perhaps truncated. But I'm having a hard time figuring out which.
I had the same error. xcrun is returning an error, probably due to an incorrect environment variable. In my case, I was running jhbuild while in a jhbuild shell, which caused the SDKDIR environment variable to contain two copies of the path to the SDK directory. Exiting the jhbuild shell fixed the problem.
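In practice the check and fix look roughly like this (a sketch; the exact variable contents depend on your .jhbuildrc):
# a doubled path in SDKDIR is a telltale sign you are still inside a jhbuild shell
echo "$SDKDIR"
# leave the jhbuild shell, then retry from a normal login shell
exit
jhbuild bootstrap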

How to compile the tutorial program in TensorFlow

After configuring TensorFlow, I tried to run the command
bazel build -c opt --config=cuda //tensorflow/cc:tutorials_example_trainer
But an error occurred which I tried everything possible to solve, without success.
ERROR: Skipping '//tensorflow/cc:tutorials_example_trainer': error loading package 'tensorflow/cc': Encountered error while reading extension file 'cuda/build_defs.bzl': no such package '@local_config_cuda//cuda': Traceback (most recent call last):
File "/home/manyz/tensorflow/third_party/gpus/cuda_configure.bzl", line 1042
_create_local_cuda_repository(repository_ctx)
File "/home/manyz/tensorflow/third_party/gpus/cuda_configure.bzl", line 975, in _create_local_cuda_repository
_host_compiler_includes(repository_ctx, cc)
File "/home/manyz/tensorflow/third_party/gpus/cuda_configure.bzl", line 145, in _host_compiler_includes
get_cxx_inc_directories(repository_ctx, cc)
File "/home/manyz/tensorflow/third_party/gpus/cuda_configure.bzl", line 120, in get_cxx_inc_directories
set(includes_cpp)
The set constructor for depsets is deprecated and will be removed. Please use the depset constructor instead. You can temporarily enable the deprecated set constructor by passing the flag --incompatible_disallow_set_constructor=false
WARNING: Target pattern parsing failed.
ERROR: error loading package 'tensorflow/cc': Encountered error while reading extension file 'cuda/build_defs.bzl': no such package '@local_config_cuda//cuda': Traceback (most recent call last):
File "/home/manyz/tensorflow/third_party/gpus/cuda_configure.bzl", line 1042
_create_local_cuda_repository(repository_ctx)
File "/home/manyz/tensorflow/third_party/gpus/cuda_configure.bzl", line 975, in _create_local_cuda_repository
_host_compiler_includes(repository_ctx, cc)
File "/home/manyz/tensorflow/third_party/gpus/cuda_configure.bzl", line 145, in _host_compiler_includes
get_cxx_inc_directories(repository_ctx, cc)
File "/home/manyz/tensorflow/third_party/gpus/cuda_configure.bzl", line 120, in get_cxx_inc_directories
set(includes_cpp)
The set constructor for depsets is deprecated and will be removed. Please use the depset constructor instead. You can temporarily enable the deprecated set constructor by passing the flag --incompatible_disallow_set_constructor=false
INFO: Elapsed time: 2.293s
FAILED: Build did NOT complete successfully (0 packages loaded)
currently loading: tensorflow/cc
Note that I've installed CUDA 8.0, cuDNN 5.0, and Bazel 0.6.0. My system is Ubuntu 16.04.
It seems there is already an issue open for this problem: https://github.com/tensorflow/tensorflow/issues/11859. The last comment says the issue can be fixed by editing line 120 of tensorflow/third_party/gpus/cuda_configure.bzl. If that doesn't help, I'd subscribe to the issue and wait for a fix.
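Until a proper fix lands, the error message itself offers a temporary escape hatch: re-enable the deprecated set() constructor for this one build (a workaround only; the real fix is the cuda_configure.bzl edit described in the issue):
bazel build -c opt --config=cuda \
  --incompatible_disallow_set_constructor=false \
  //tensorflow/cc:tutorials_example_trainer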

Kivy - OSError, but apps still run successfully?

Every time I run a Kivy app I see an OSError (see the last line of the example below), even though my app runs successfully. What could be the cause of this error?
Exception in thread Thread-1:
Traceback (most recent call last):
File "/usr/lib/python2.7/threading.py", line 810, in __bootstrap_inner
self.run()
File "/usr/lib/python2.7/threading.py", line 763, in run
self.__target(*self.__args, **self.__kwargs)
File "/usr/lib/python2.7/dist-packages/kivy/input/providers/mtdev.py", line 197, in _thread_run
_device = Device(_fn)
File "/usr/lib/python2.7/dist-packages/kivy/lib/mtdev.py", line 131, in __init__
self._fd = os.open(filename, os.O_NONBLOCK | os.O_RDONLY)
OSError: [Errno 13] Permission denied: '/dev/input/event5'
This error is not important; it just means that Kivy checked the possible input providers on your OS and found that this one is forbidden.
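If you actually want the mtdev provider (for a touchscreen, say) rather than just ignoring the traceback, the usual remedy is to give your user read access to the event devices, roughly as follows (assuming your distribution assigns /dev/input/event* to the input group; otherwise a udev rule is needed):
# check which group may read the device that was denied
ls -l /dev/input/event5
# add yourself to that group, then log out and back in
sudo usermod -aG input "$USER"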