Error when trying to compile Chromium - chromium

I tried to use the command ninja -C out/Debug chrome to compile Chromium.
However, the error message says:
ninja: error: loading 'build.ninja': The system cannot find the file specified.
ninja: Entering directory 'out/Debug'
Could someone tell me what the problem is?
Thanks.

The out directory and its contents (including build.ninja) are created by running
python build\gyp_chromium
or
gclient runhooks
Executing either command from within /src should allow your compile to proceed.
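For example, a minimal sequence on Windows (assuming depot_tools is on your PATH and the paths below match your checkout) might look like:
cd src
gclient runhooks
ninja -C out\Debug chrome
gclient runhooks regenerates the build files, including out\Debug\build.ninja, so the ninja invocation can then find them.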

On a Windows machine!
When I ran gn gen out/Default, it also gave me an error:
ERROR at //build/config/win/visual_studio_version.gni:27:7: Script returned non-zero exit code.
exec_script("../../vs_toolchain.py", [ "get_toolchain_dir" ], "scope")
^----------
Current dir: D:/Chromium/src/out/Goma/
Command: C:/Python27/python.exe -- D:/Chromium/src/build/vs_toolchain.py get_toolchain_dir
Returned 1 and printed out:
Please follow the instructions at https://chromium.googlesource.com/chromium/src/+/master/docs/windows_build_instructions.md
I did the following steps and it worked for me.
Set this variable (reference; I'm not sure about its purpose yet):
set DEPOT_TOOLS_WIN_TOOLCHAIN=0
Run the command gn gen out/Default
Run the build command again:
autoninja -C out/Default chrome
It is also recommended to run gclient sync from the out/Default directory.
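Put together, a rough version of the sequence from a command prompt in the src directory (paths assumed; adjust to your checkout) is:
set DEPOT_TOOLS_WIN_TOOLCHAIN=0
gn gen out/Default
autoninja -C out/Default chrome
Setting DEPOT_TOOLS_WIN_TOOLCHAIN=0 makes the build use a locally installed Visual Studio toolchain instead of the Google-internal one, which is why it resolves the vs_toolchain.py failure for people outside Google.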

After the switch to "gn" you could try:
gn gen out/Debug

Related

Installing LLReve using CMake. Unknown BISON_TARGET error

I am getting the following error:
CMake Error at CMakeLists.txt:9 (BISON_TARGET):
Unknown CMake command "BISON_TARGET".
when I run the command:
cmake .. -GNinja
Please tell me what to do. I searched on Google a lot, came up with the additions below, and finally ran the command:
cmake .. -D LLVM_DIR=/usr/lib/llvm-5.0/cmake/ -D FLEX_EXECUTABLE=/usr/local/Cellar/flex/2.5.37/bin/ -D FLEX_INCLUDE_DIR=/usr/local/Cellar/flex/2.5.37/include/ -D BISON_EXECUTABLE=/usr/bin/bison
but it still shows the same error :(
Could someone please help?
Your error is occurring because the BISON_TARGET command has not yet been defined; it is supplied by CMake's FindBISON module. The error indicates that either Bison was not found on your system (hopefully you have it installed), or cmake was run from the wrong directory. Bison is included in the top-level CMake file via:
find_package(BISON REQUIRED)
This find_package call must appear before the BISON_TARGET CMake function is used. The LLReve instructions for compiling this repository are explicit about which directory to run the build commands in:
Go to the llreve directory and run
cd reve
mkdir build
cd build
cmake .. -GNinja
ninja
This runs CMake against the CMakeLists.txt in the llreve/reve directory, not the one in llreve/reve/reve. Please ensure you are running CMake from the correct location, as running it against anything other than the top-level CMake file will often yield errors like this one.
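For illustration, here is a minimal, self-contained CMakeLists.txt sketch showing the required ordering (the project and file names are hypothetical, not taken from LLReve):
cmake_minimum_required(VERSION 3.5)
project(parser_demo C)
# FindBISON is what defines the BISON_TARGET command; without this line
# CMake reports: Unknown CMake command "BISON_TARGET".
find_package(BISON REQUIRED)
# Generate parser.c from parser.y at build time.
BISON_TARGET(MyParser parser.y ${CMAKE_CURRENT_BINARY_DIR}/parser.c)
add_executable(parser_demo ${BISON_MyParser_OUTPUTS})
If find_package(BISON REQUIRED) itself fails, Bison is either not installed or not on the path CMake searches, which is where the -D BISON_EXECUTABLE=... override from the question would come in.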

Chromium single executable build

I've been following the documentation here to build Chromium on Ubuntu 18.
I've been able to build successfully using these flags:
gn gen out/amd64 --args='is_official_build=true is_debug=false'
and this command to build
autoninja -C out/amd64 chrome
The problem is that the resulting chrome binary depends on all the other files in the output directory. When I copy the chrome file to another directory and execute it, I get this error:
[0608/170138.585281:ERROR:icu_util.cc(165)] Invalid file descriptor to ICU data received.
Trace/breakpoint trap (core dumped)
How do I go about building Chromium so that I can run it from a single file, similar to how Puppeteer is a single executable?

How to debug OpenJDK 9 with NetBeans 8.2 on Windows 10?

When I tried to debug OpenJDK 9 with NetBeans 8.2 on Windows 10, I got the following error:
"\"D:/jdk9/jdk9/build/windows-x86_64-normal-server-fastdebug/jdk/bin/java.exe\":
not in executable format: File format not recognized"
How can I fix it?
I built the source code with the command "./configure --with-freetype=/cygdrive/c/freetype --enable-debug --with-target-bits=64" and then ran make all. I also tried slowdebug; however, that also failed.
If I "run" the project instead of "debug", it runs successfully (as shown below), so there is no issue with the file windows-x86_64-normal-server-fastdebug/jdk/bin/java.exe; it seems gdb doesn't recognize the java.exe file.
I also opened the OpenJDK source code from the location D:/jdk9/jdk/common/nb_native in NetBeans (see below):
I then tried to build it from NetBeans; however, it produces the following error:
cd 'D:\jdk9\jdk\common'
sh ../configure --with-freetype=/cygdrive/c/freetype --with-debug-level=slowdebug --with-target-bits=64
/cygdrive/d/jdk9/jdk/configure: /cygdrive/d/jdk9/jdk/common/autoconf/configure: No such file or directory
PRE-BUILD FAILED (exit value 1, total time: 743ms)
I know that both paths /cygdrive/d/jdk9/jdk/configure and /cygdrive/d/jdk9/jdk/common/autoconf/configure exist.
This is how I configured the pre-build commands:
If you built the OpenJDK with --with-target-bits=64, then make sure you have installed a 64-bit gdb, or build the OpenJDK in 32-bit mode.
Your steps to import the nbproject look correct.
Change Build => Pre-Build properties:
Set "Working Directory" to ../..
Set "Command Line" to sh ./configure ...

tutorials_example_trainer fails in debug mode (-c dbg)

The build for tutorials_example_trainer works fine in release mode (-c opt), but fails in debug mode (-c dbg).
Did anyone encounter this? It seems to be a bug.
The command I run:
bazel build -c dbg --config=cuda //tensorflow/cc:tutorials_example_trainer --verbose_failures
The build fails with the following message:
/usr/include/c++/4.8/mutex(125) (col. 5): error: calling a __host__
function("std::__mutex_base::__mutex_base [subobject]") from a
__device__ function("std::mutex::mutex") is not allowed
<some warnings>
1 error detected in the compilation of
"/tmp/tmpxft_00005e78_00000000-10_cwise_op_gpu_log.cu.compute_52.cpp1.ii".
ERROR:
/home/uriv/git/tensorflow/tensorflow/tensorflow/core/BUILD:248:1:
output
'tensorflow/core/_objs/gpu_kernels/tensorflow/core/kernels/cwise_op_gpu_log.cu.pic.o'
was not created. ERROR:
/home/uriv/git/tensorflow/tensorflow/tensorflow/core/BUILD:248:1: not
all outputs were created.
Thanks.
You can work around the problem by editing
tensorflow/third_party/eigen3/unsupported/Eigen/CXX11/src/Tensor/TensorDeviceType.h
and commenting out the following two lines of code:
static tensorflow::mutex m_devicePropInitMutex(tensorflow::LINKER_INITIALIZED);
and
tensorflow::mutex_lock l(m_devicePropInitMutex);
I'll push a proper fix to the tensorflow repository shortly.
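For reference, after the edit the relevant region of TensorDeviceType.h would contain something like this (surrounding code omitted; commenting the lines out simply drops the mutex guard around the device-property initialization as a temporary workaround):
// static tensorflow::mutex m_devicePropInitMutex(tensorflow::LINKER_INITIALIZED);
...
// tensorflow::mutex_lock l(m_devicePropInitMutex);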

Running seq2seq model error

I am trying to run the code in this tutorial.
When I try to run this command:
sudo bazel run -c opt tensorflow/models/rnn/translate/translate.py -- -- data_dir ../data/translate/
I get the following error:
...................
ERROR: Cannot run target //tensorflow/models/rnn/translate:translate.py: Not executable.
INFO: Elapsed time: 1.537s
ERROR: Build failed. Not running target.
Any ideas how to resolve?
It seems there are a lot of mistakes in the TensorFlow tutorial.
I was able to run it by removing the .py and adding an extra -- before the options, like:
bazel run -c opt tensorflow/models/rnn/translate/translate -- --data_dir /home/minsoo/tensorflowrnn/data
The data directory should be changed according to your system.
I ran it by going to the directory and running:
python translate.py
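For example (a sketch only; the data path follows the question and will differ on your machine):
cd tensorflow/models/rnn/translate
python translate.py --data_dir ../data/translate/
Running the script directly with Python sidesteps bazel's target resolution entirely, which is why it avoids the "Not executable" error.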