Push to GitLab: ! [remote rejected] master -> master (pre-receive hook declined)

I attempted to push a local repository to GitLab and got the following error:
Enumerating objects: 3, done.
Counting objects: 100% (3/3), done.
Writing objects: 100% (3/3), 220 bytes | 73.00 KiB/s, done.
Total 3 (delta 0), reused 0 (delta 0)
remote: /opt/gitlab/embedded/service/gitlab-shell/lib/gitlab_net.rb:153:in `parse_who': undefined method `start_with?' for nil:NilClass (NoMethodError)
remote: from /opt/gitlab/embedded/service/gitlab-shell/lib/gitlab_net.rb:31:in `check_access'
remote: from /opt/gitlab/embedded/service/gitlab-shell/lib/gitlab_access.rb:27:in `block in exec'
remote: from /opt/gitlab/embedded/service/gitlab-shell/lib/gitlab_metrics.rb:50:in `measure'
remote: from /opt/gitlab/embedded/service/gitlab-shell/lib/gitlab_access.rb:26:in `exec'
remote: from hooks/pre-receive:30:in `<main>'
To <my_gitlab_server>:<path_to_repo>
! [remote rejected] master -> master (pre-receive hook declined)
error: failed to push some refs to 'git@<my_gitlab_server>:<path_to_repo>'
I am using GitLab Community Edition 11.4.0 installed on a Linux server, and Git Bash 2.19.1 64-bit on Windows.
I created an empty project in the GitLab UI, cloned this to a local repository and attempted a push from the local repository to this project, for which I have Maintainer status.
The pre-receive hook being called here is the default one placed by GitLab, not the one in the local repository (since that one is only a .sample file).

I have been getting the exact same error. So what I have just done is:
Accessed the gitlab server:
cd /opt/gitlab/embedded/service/gitlab-shell/hooks
mv pre-receive pre-receive-old
And I was able to push to my repository.
Why this happened and what exactly the hook was blocking beats me.

Related

Laravel 8 deploy to heroku failed on Pusher.php ($auth_key)

remote: Generating optimized autoload files
remote: > Illuminate\Foundation\ComposerScripts::postAutoloadDump
remote: > @php artisan package:discover --ansi
remote:
remote: In Pusher.php line 63:
remote:
remote: Pusher\Pusher::__construct(): Argument #1 ($auth_key) must be of type string, null given, called in /tmp/build_fb8ecd51/vendor/laravel/framework/src/Illuminate/Broadcasting/BroadcastManager.php on line 218
Here is my .env Pusher configuration. I have "pusher/pusher-php-server": "^7.0" installed and have tried changing it to other versions, but it is still not working.
PUSHER_APP_ID=1368435
PUSHER_APP_KEY=fe949b1c86852b82bc6e
PUSHER_APP_SECRET=117bc32cf87c7d0b37f1
PUSHER_APP_CLUSTER=ap1
You need to go to your app on Heroku -> Settings -> Reveal Config Vars, then add each of these as a key and value:
PUSHER_APP_ID=1368435
PUSHER_APP_KEY=fe949b1c86852b82bc6e
PUSHER_APP_SECRET=117bc32cf87c7d0b37f1
PUSHER_APP_CLUSTER=ap1
Hopefully that helps you.
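The same variables can also be set from the Heroku CLI instead of the dashboard. A sketch using the standard heroku config:set command; substitute your own app name and your own credentials:

```shell
heroku config:set \
  PUSHER_APP_ID=1368435 \
  PUSHER_APP_KEY=fe949b1c86852b82bc6e \
  PUSHER_APP_SECRET=117bc32cf87c7d0b37f1 \
  PUSHER_APP_CLUSTER=ap1 \
  --app <your-app-name>
```

Either way, trigger a new deploy afterwards so the build picks up the new config vars.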

Tensorflow Apps No Longer Deploying To Heroku: Slug Size Too Large

I have a number of heroku applications that I've been able to update pretty seamlessly until recently. They make use of tensorflow and streamlit, and all give off similar messages on deployment:
-----> Compressing...
remote: ! Compiled slug size: 560.2M is too large (max is 500M).
remote: ! See: http://devcenter.heroku.com/articles/slug-size
remote:
remote: ! Push failed
remote: !
remote: ! ## Warning - The same version of this code has already been built: 5c0f686a86459f6e81627ce14770f7494f4bd244
remote: !
remote: ! We have detected that you have triggered a build from source code with version 5c0f686a86459f6e81627ce14770f7494f4bd244
remote: ! at least twice. One common cause of this behavior is attempting to deploy code from a different branch.
remote: !
remote: ! If you are developing on a branch and deploying via git you must run:
remote: !
remote: ! git push heroku <branchname>:main
remote: !
remote: ! This article goes into details on the behavior:
remote: ! https://devcenter.heroku.com/articles/duplicate-build-version
remote:
remote: Verifying deploy...
remote:
remote: ! Push rejected to dry-caverns-08193.
remote:
To https://git.heroku.com/dry-caverns-08193.git
! [remote rejected] master -> master (pre-receive hook declined)
error: failed to push some refs to 'https://git.heroku.com/dry-caverns-08193.git'
I know it says the slug size is too large, but they have run previously with the same message, so I'm not sure that's what the issue is.
Here's my file setup:
app.py
Procfile
requirements.txt
setup.sh
my_model/
-- assets/
-- variables/
---- variables.index
---- variables.data-00000-of-000001
-- saved_model.pb
requirements.txt reads as follows:
tensorflow==2.*
streamlit==0.67.0
requests==2.24.0
requests-oauthlib==1.3.0
setup.sh reads as follows:
mkdir -p ~/.streamlit/
echo "\
[general]\n\
email = \"myemail@gmail.com\"\n\
" > ~/.streamlit/credentials.toml
echo "\
[server]\n\
headless = true\n\
enableCORS=false\n\
port = $PORT\n\
" > ~/.streamlit/config.toml
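Note that plain echo in bash does not expand \n escapes by default, so the echo lines above can leave literal backslashes in the files. A hedged alternative using printf (same file locations as above; 8501 is only an illustrative fallback port, not from the original post):

```shell
# Write the same two Streamlit config files with printf, which expands
# \n portably across shells. ${PORT:-8501} falls back to 8501 when the
# PORT environment variable is unset (Heroku sets it at runtime).
mkdir -p ~/.streamlit/

printf '[general]\nemail = "myemail@gmail.com"\n' \
  > ~/.streamlit/credentials.toml

printf '[server]\nheadless = true\nenableCORS=false\nport = %s\n' \
  "${PORT:-8501}" > ~/.streamlit/config.toml
```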
My immediate suspicion is that tensorflow is causing the slug size to be too large -- but again, it worked with tensorflow previously, so I'm not sure why it would not work now.
Is there anything else it could be?
EDIT:
After looking at this question: Heroku: If you are developing on a branch and deploying via git you must run: I tried git push heroku master:main, but that did not work either; the logs showed:
Push rejected to dry-caverns-08193.
remote:
To https://git.heroku.com/dry-caverns-08193.git
! [remote rejected] master -> main (pre-receive hook declined)
error: failed to push some refs to 'https://git.heroku.com/dry-caverns-08193.git'
If you are using the free dyno:
Make a change in the requirements.txt:
tensorflow-cpu
instead of
tensorflow
This would reduce your slug size considerably
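Applied to the requirements.txt shown in the question, the change would look like this (keeping the original pin style; tensorflow-cpu tracks the same 2.x releases as tensorflow):

```
tensorflow-cpu==2.*
streamlit==0.67.0
requests==2.24.0
requests-oauthlib==1.3.0
```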
Additionally, your issue may also depend on the size of your model weights.
Another Tip:
Use a `.slugignore` file if you are pulling the code directly from GitHub, so that you can keep anything from READMEs to GitHub Actions workflows to notebooks out of your dyno.
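As a sketch, a `.slugignore` for a project like this might look as follows; the patterns are illustrative, not taken from the original post:

```shell
# Write an illustrative .slugignore; each line is a pattern Heroku
# excludes from the compiled slug (notebooks, docs, CI config, tests).
cat > .slugignore <<'EOF'
*.ipynb
README.md
.github/
tests/
EOF
```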
I haven't thoroughly tested this, but just placing it here in case it helps someone.
I have had many apps over the 500 MB slug size limit. Heroku warns but doesn't error when I push them. That is possibly because I am on a paid plan rather than the free plan. So if someone is desperate to get an app working and cannot reduce the slug size below 500 MB any other way, moving that app to the lowest paid plan may be a workable solution.

RPC failed; Curl 56 OpenSSL error when deploying app to heroku

I keep getting the following error when I try to push my application to Heroku:
Enumerating objects: 62, done.
Counting objects: 100% (62/62), done.
Delta compression using up to 12 threads
Compressing objects: 100% (57/57), done.
Writing objects: 100% (62/62), 16.52 MiB | 25.21 MiB/s, done.
Total 62 (delta 5), reused 0 (delta 0), pack-reused 0
error: RPC failed; curl 56 OpenSSL SSL_read: error:140943FC:SSL routines:ssl3_read_bytes:sslv3 alert bad record mac,
errno 0
fatal: the remote end hung up unexpectedly
fatal: the remote end hung up unexpectedly
Everything up-to-date
What I've tried:
Increasing postBuffer size
git config --global http.postBuffer 52428800000
Deleting and re-initializing the repository (git init)
Some things that may be relevant:
- The app size is about 140 MB.
- I use Express, Multer, and sessions.
Any help would be appreciated.
Had the same issue, and I used this command to fix it:
git push heroku master --force

How do I install Radare2 on Windows?

I am trying to get Radare2 installed on my Windows machine. I do have Windows Subsystem for Linux up and running if that changes things. I have tried the git technique from their website:
git clone https://github.com/radare/radare2
cd radare2
sys/install.sh
This did strange things depending on what I did. There are some comments headed with the # symbol that explain what's going on.
#-----Here I clone the repo.
PS [*****] C:\Users\*****\AppData\Local\Programs> git clone https://github.com/radare/radare2
Cloning into 'radare2'...
remote: Enumerating objects: 81, done.
remote: Counting objects: 100% (81/81), done.
remote: Compressing objects: 100% (71/71), done.
remote: Total 215078 (delta 27), reused 17 (delta 10), pack-reused 214997
Receiving objects: 100% (215078/215078), 117.53 MiB | 817.00 KiB/s, done.
Resolving deltas: 100% (164658/164658), done.
Updating files: 100% (3934/3934), done.
#-----Here I cd into the new repo and run the install script.
PS [*****] C:\Users\*****\AppData\Local\Programs> cd radare2
#-----This next command opened a new window, which disappeared immediately.
PS [*****] C:\Users\*****\AppData\Local\Programs\radare2> sys/install.sh
#-----Calling bash and passing the script yielded some nice errors.
PS [*****] C:\Users\*****\AppData\Local\Programs\radare2> bash sys/install.sh
sys/install.sh: line 2: $'\r': command not found
: ambiguous redirect 4: 1
sys/install.sh: line 6: $'\r': command not found
sys/install.sh: line 11: syntax error near unexpected token `$'in\r''
'ys/install.sh: line 11: ` case "$1" in
#-----Here I fired up my WSL Ubuntu system and tried to run the script.
PS [*****] C:\Users\*****\AppData\Local\Programs\radare2> wsl
*****@DESKTOP-6L7K90U:/mnt/c/Users/*****/AppData/Local/Programs/radare2$ sys/install.sh
: not found.sh: 2:
sys/install.sh: 5: Syntax error: Bad fd number
*****@DESKTOP-6L7K90U:/mnt/c/Users/*****/AppData/Local/Programs/radare2$
At this point, I decided to try the Windows binary instead. I went to the download page, downloaded the Windows binary, and unpacked it into my AppData programs folder. I then opened that folder and double-clicked radare2.exe. This made a quick blip on the taskbar, as if a window was trying to open, which also immediately closed.
At this point, I suspect errors in the source code for Radare2 are causing it to crash almost immediately. Is this the case? Or do I need to do something different to get this running?
-----Solved-----
I went and experimented a little, including installing to a Linux VM using the git clone method. I have found that the Windows binary is the way to go for this. To use it, unpack the downloaded binary, then open CMD/PowerShell in the radare2 directory and run bin/radare2.exe or bin/r2.bat. You will need to add these to the PATH manually, though.
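For what it's worth, the `$'\r': command not found` and `Syntax error: Bad fd number` messages above are the classic symptoms of a shell script checked out with Windows CRLF line endings (Git for Windows often defaults to core.autocrlf=true). A minimal sketch of stripping them, assuming GNU sed; the file name here is just a demo, not from the radare2 tree:

```shell
# A script saved with Windows line endings: each line ends in \r\n.
printf 'echo ok\r\n' > crlf_demo.sh

# Before the fix, bash passes the stray \r through to commands (and on
# blank or bare-command lines fails with "$'\r': command not found").
# Strip the trailing carriage returns in place (GNU sed understands \r):
sed -i 's/\r$//' crlf_demo.sh

bash crlf_demo.sh   # prints "ok" with no stray carriage return
```

Alternatively, cloning with git clone -c core.autocrlf=input keeps LF endings in the working tree in the first place.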

tensorflow build fails with "missing dependency" error

I'm completely new to bazel and tensorflow so the solution to this may be obvious to someone with some experience. My bazel build of tensorflow fails with a "missing dependency" error message. Here is the relevant sequence of build commands and output:
(tf-gpu)kss@linux-9c32:~/projects> git clone --recurse-submodules https://github.com/tensorflow/tensorflow tensorflow-nogpu
Cloning into 'tensorflow-nogpu'...
remote: Counting objects: 16735, done.
remote: Compressing objects: 100% (152/152), done.
remote: Total 16735 (delta 73), reused 0 (delta 0), pack-reused 16583
Receiving objects: 100% (16735/16735), 25.25 MiB | 911.00 KiB/s, done.
Resolving deltas: 100% (10889/10889), done.
Checking connectivity... done.
Submodule 'google/protobuf' (https://github.com/google/protobuf.git) registered for path 'google/protobuf'
Cloning into 'google/protobuf'...
remote: Counting objects: 30266, done.
remote: Compressing objects: 100% (113/113), done.
remote: Total 30266 (delta 57), reused 0 (delta 0), pack-reused 30151
Receiving objects: 100% (30266/30266), 28.90 MiB | 1.98 MiB/s, done.
Resolving deltas: 100% (20225/20225), done.
Checking connectivity... done.
Submodule path 'google/protobuf': checked out '0906f5d18a2548024b511eadcbb4cfc0ca56cd67'
(tf-gpu)kss@linux-9c32:~/projects> cd tensorflow-nogpu/
(tf-gpu)kss@linux-9c32:~/projects/tensorflow-nogpu> ./configure
Please specify the location of python. [Default is /home/kss/.venv/tf-gpu/bin/python]:
Do you wish to build TensorFlow with GPU support? [y/N]
No GPU support will be enabled for TensorFlow
Configuration finished
(tf-gpu)kss@linux-9c32:~/projects/tensorflow-nogpu> bazel build -c opt //tensorflow/tools/pip_package:build_pip_package
Sending SIGTERM to previous Bazel server (pid=8491)... done.
....
INFO: Found 1 target...
ERROR: /home/kss/.cache/bazel/_bazel_kss/b97e0e942a10977a6b42467ea6712cbf/external/re2/BUILD:9:1: undeclared inclusion(s) in rule '@re2//:re2':
this rule is missing dependency declarations for the following files included by 'external/re2/re2/perl_groups.cc':
'/usr/lib64/gcc/x86_64-suse-linux/4.8/include/stddef.h'
'/usr/lib64/gcc/x86_64-suse-linux/4.8/include/stdarg.h'
'/usr/lib64/gcc/x86_64-suse-linux/4.8/include/stdint.h'
'/usr/lib64/gcc/x86_64-suse-linux/4.8/include/x86intrin.h'
'/usr/lib64/gcc/x86_64-suse-linux/4.8/include/ia32intrin.h'
'/usr/lib64/gcc/x86_64-suse-linux/4.8/include/mmintrin.h'
'/usr/lib64/gcc/x86_64-suse-linux/4.8/include/xmmintrin.h'
'/usr/lib64/gcc/x86_64-suse-linux/4.8/include/mm_malloc.h'
'/usr/lib64/gcc/x86_64-suse-linux/4.8/include/emmintrin.h'
'/usr/lib64/gcc/x86_64-suse-linux/4.8/include/immintrin.h'
'/usr/lib64/gcc/x86_64-suse-linux/4.8/include/fxsrintrin.h'
'/usr/lib64/gcc/x86_64-suse-linux/4.8/include/adxintrin.h'.
Target //tensorflow/tools/pip_package:build_pip_package failed to build
Use --verbose_failures to see the command lines of failed build steps.
INFO: Elapsed time: 144.661s, Critical Path: 1.18s
(tf-gpu)kss@linux-9c32:~/projects/tensorflow-nogpu>
The version of Bazel I'm using is release 0.1.4, and I'm running on openSUSE 13.2. I confirmed that the header files do exist, which is probably expected:
(tf-gpu)kss@linux-9c32:~/projects/tensorflow-nogpu> ll /usr/lib64/gcc/x86_64-suse-linux/4.8/include/stddef.h
-rw-r--r-- 1 root root 13619 Oct 6 2014 /usr/lib64/gcc/x86_64-suse-linux/4.8/include/stddef.h
Note for anyone who finds this question:
Use Damien's answer below except that you have to use --crosstool_top rather than --crosstool. Also if you are building for GPU acceleration you will also need to modify the CROSSTOOL file in the tensorflow repo like:
(tf-gpu)kss@linux-9c32:~/projects/tensorflow-gpu> git diff third_party/gpus/crosstool/CROSSTOOL | cat
diff --git a/third_party/gpus/crosstool/CROSSTOOL b/third_party/gpus/crosstool/CROSSTOOL
index dfde7cd..b63f950 100644
--- a/third_party/gpus/crosstool/CROSSTOOL
+++ b/third_party/gpus/crosstool/CROSSTOOL
@@ -56,6 +56,7 @@ toolchain {
cxx_builtin_include_directory: "/usr/lib/gcc/"
cxx_builtin_include_directory: "/usr/local/include"
cxx_builtin_include_directory: "/usr/include"
+ cxx_builtin_include_directory: "/usr/lib64/gcc"
tool_path { name: "gcov" path: "/usr/bin/gcov" }
# C(++) compiles invoke the compiler (as that is the one knowing where
You should tweak the C++ compiler.
To do so, here's the best way to proceed:
edit the file tools/cpp/CROSSTOOL (https://github.com/bazelbuild/bazel/blob/master/tools/cpp/CROSSTOOL) from your package path directory (should be in ~/.bazel/base_workspace, can be found with bazel info package_path) to add a line cxx_builtin_include_directory: /usr/lib64/gcc around line 100 (see https://github.com/bazelbuild/bazel/blob/master/tools/cpp/CROSSTOOL#L101).
Then run echo "build --crosstool=//tools/cpp:toolchain" >> ~/.bazelrc and retry the build.
Sorry for the mess, we are working on making C++ toolchain work better out of the box.
Bazel complains about system header files because the compiler uses the -MD flag (as opposed to -MMD) when generating dependencies. While using -MD is reasonable for an environment that changes often, listing dependencies on system header files causes the 'missing dependency declarations' errors.
What helped me was converting the '-MD' flag into '-MMD' in the compiler wrapper file third_party/gpus/crosstool/clang/bin/crosstool_wrapper_driver_is_not_gcc.tpl, just before the subprocess.call([CPU_COMPILER]...) call:
cpu_compiler_flags = ['-MMD' if flag == '-MD' else flag for flag in cpu_compiler_flags]
and third_party/sycl/crosstool/computecpp.tpl, similar place:
computecpp_device_compiler_flags = ['-MMD' if flag == '-MD' else flag for flag in computecpp_device_compiler_flags]
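The substitution used in both wrapper files is the same one-liner; here it is as a standalone sketch (the function name demote_md_flag is mine, not from the crosstool sources):

```python
def demote_md_flag(flags):
    """Replace the compiler's -MD flag with -MMD so the generated .d
    dependency files omit system headers, which is what triggers Bazel's
    'missing dependency declarations' errors for /usr/... includes."""
    return ['-MMD' if flag == '-MD' else flag for flag in flags]


# A typical compile command line before and after the rewrite:
original = ['gcc', '-MD', '-c', 'perl_groups.cc', '-o', 'perl_groups.o']
rewritten = demote_md_flag(original)
print(rewritten)  # ['gcc', '-MMD', '-c', 'perl_groups.cc', '-o', 'perl_groups.o']
```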