GitList error: "The filename, directory name, or volume label syntax is incorrect"

I am using GitList for the first time, I'm kind of new to it, and I'm still not the best at Git. I have been trying to set up GitList, but I get this error:
Oops! The filename, directory name, or volume label syntax is incorrect.
Is there a way to fix this?
My config.ini file:
[git]
; client = '/usr/bin/git' ; Your git executable path
default_branch = 'main' ; Default branch when HEAD is detached
; repositories[] = '/repos/' ; Path to your repositories
; If you wish to add more repositories, just add a new line
; WINDOWS USERS
client = '"C:\Program Files\Git\bin\git.exe"' ; Your git executable path
repositories[] = 'C:\xampp\htdocs\gitlist\repos' ; Path to your repositories
; You can hide repositories from GitList, just copy this for each repository you want to hide or add a regex (including delimiters), eg. hidden[] = '/(.+)\.git/'
; hidden[] = '/home/git/repositories/BetaTest'
[app]
debug = false
cache = true
theme = "default"
title = "html title"
[clone_button]
; ssh remote
show_ssh_remote = false ; display remote URL for SSH
ssh_host = '' ; host to use for cloning via HTTP (default: none => uses gitlist web host)
ssh_url_subdir = '' ; if cloning via SSH is triggered using special dir (e.g. ssh://example.com/git/repo.git)
; has to end with trailing slash
ssh_port = '' ; port to use for cloning via SSH (default: 22 => standard ssh port)
ssh_user = 'git' ; user to use for cloning via SSH
ssh_user_dynamic = false ; when enabled, ssh_user is set to $_SERVER['PHP_AUTH_USER']
; http remote
show_http_remote = false ; display remote URL for HTTP
http_host = '' ; host to use for cloning via HTTP (default: none => uses gitlist web host)
use_https = true ; generate URL with https://
http_url_subdir = 'git/' ; if cloning via HTTP is triggered using virtual dir (e.g. https://example.com/git/repo.git)
; has to end with trailing slash
http_user = '' ; user to use for cloning via HTTP (default: none)
http_user_dynamic = false ; when enabled, http_user is set to $_SERVER['PHP_AUTH_USER']
; If you need to specify custom filetypes for certain extensions, do this here
[filetypes]
; extension = type
; dist = xml
; If you need to set file types as binary or not, do this here
[binary_filetypes]
; extension = true
; svh = false
; map = true
; set the timezone
[date]
timezone = UTC
format = 'd/m/Y H:i:s'
; custom avatar service
[avatar]
;url = '//gravatar.com/avatar/'
;query[] = 'd=identicon'
This is the folder structure:
├── gitlist/
│   ├── (the other stuff)
│   └── repos/
│       └── test/
│           └── README.md
The repo is called test and it is in the repos/ folder.

I fixed this by removing the two embedded double quotes from the client value:
Before:
client = '"C:\Program Files\Git\bin\git.exe"' ; Your git executable path
After:
client = 'C:\Program Files\Git\bin\git.exe' ; Your git executable path
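A minimal plain-shell sketch (not GitList itself) of the likely failure mode, assuming the INI parser keeps the inner quotes as part of the value:

```shell
# The embedded double quotes are literal characters of the string, so the
# OS is asked to open a file whose name starts with '"' -- which Windows
# rejects as invalid filename/volume-label syntax.
client='"C:\Program Files\Git\bin\git.exe"'   # value as read from the broken config.ini
printf '%s\n' "$client"                       # the quotes print as part of the path
```

With the quotes removed, the value is a plain path and the executable can be found.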

Related

How do I correctly configure Gitlab Runners with S3/Minio as a distributed cache?

I am running GitLab Runners on OpenShift, and they pick up jobs correctly. The cache is configured to use S3, with a local MinIO service acting as the S3 backend for the distributed cache. However, when a job runs, the runner appears to ignore this setup and tries to use a local cache instead (and indeed gets a permission-denied error when trying to create it locally).
config.toml:
concurrent = 8
check_interval = 0

[[runners]]
  name = "GitLab Runner"
  url = "https://gitlab.com/"
  token = "XXX"
  executor = "kubernetes"
  builds_dir = "/tmp/build"
  environment = ["HOME=/tmp/build"]
  cache_dir = "/tmp/cache"
  [runners.kubernetes]
    namespace = "gitlab-runners"
    privileged = false
    host = ""
    cert_file = ""
    key_file = ""
    ca_file = ""
    image = ""
    cpus = ""
    memory = ""
    service_cpus = ""
    service_memory = ""
    helper_cpus = ""
    helper_memory = ""
    helper_image = ""
  [runners.cache]
    Type = "s3"
    Shared = true
    Path = "gitlab"
    [runners.cache.s3]
      ServerAddress = "minio-service"
      AccessKey = "XXX"
      SecretKey = "XXX"
      BucketName = "gitlab-runner"
      BucketLocation = "eu-west-1"
      Insecure = true
Cache job output:
Initialized empty Git repository in /tmp/XXXX/XXX/.git/
Created fresh repository.
Checking out e57da922 as develop...
Skipping Git submodules setup
Restoring cache
00:01
Checking cache for develop-1...
FATAL: mkdir ../../../../cache: permission denied
Failed to extract cache
Executing "step_script" stage of the job script
02:02
$ npm install
added 1966 packages, and audited 1967 packages in 2m
found 0 vulnerabilities
Saving cache for successful job
00:01
Creating cache develop-1...
node_modules/: found 44671 matching files and directories
FATAL: mkdir ../../../../cache: permission denied
Failed to create cache
Cleaning up file based variables
00:00
Job succeeded
Second job (restoring from cache) output:
Restoring cache
00:00
Checking cache for develop-1...
FATAL: file does not exist
Failed to extract cache
Executing "step_script" stage of the job script

404 when executing docker push to gitlab-container-registry

I have installed gitlab-ce 13.2.0 on my server, and the container registry was immediately available.
From another server (or my local machine) I can log in, but when pushing an image to the container registry I get a 404 error: error parsing HTTP 404 response body: invalid character '<' looking for beginning of value: "<!DOCTYPE html>\n<html>\n<head>...
In my gitlab.rb I have:
external_url 'https://git.xxxxxxxx.com'
nginx['enable'] = true
nginx['client_max_body_size'] = '250m'
nginx['redirect_http_to_https'] = true
nginx['ssl_certificate'] = "/etc/gitlab/trusted-certs/xxxxxxxx.com.crt"
nginx['ssl_certificate_key'] = "/etc/gitlab/trusted-certs/xxxxxxxx.com.key"
nginx['ssl_protocols'] = "TLSv1.1 TLSv1.2"
registry_external_url 'https://git.xxxxxxxx.com'
What is confusing is that the registry_external_url is the same as the external_url. There are these lines in gitlab.rb:
### Settings used by GitLab application
# gitlab_rails['registry_enabled'] = true
# gitlab_rails['registry_host'] = "git.xxxxxxxx.com"
# gitlab_rails['registry_port'] = "5005"
# gitlab_rails['registry_path'] = "/var/opt/gitlab/gitlab-rails/shared/registry"
But when I uncomment these, I cannot log in anymore.
What could the problem be here?
This is actually because you are using the HTTPS port without proxying the registry in nginx.
Fix these lines in gitlab.rb accordingly:
registry_nginx['enable'] = true
registry_nginx['listen_https'] = true
registry_nginx['redirect_http_to_https'] = true
registry_external_url 'https://registry.YOUR_DOMAIN.gtld'
You don't need to touch the nginx['ssl_*'] parameters when you are using Let's Encrypt, since Chef takes care of them.
How is your image named? The image name must exactly match not only the registry URL but the project path too.
You can't just build "myimage:latest" and push it. It must be something like git.xxxxxxxx.com/mygroup/myproject:latest. You can obtain the correct name from the $CI_REGISTRY_IMAGE predefined variable.
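As a sketch, a .gitlab-ci.yml job using that predefined variable might look like this (the job and stage names are illustrative; CI_REGISTRY, CI_REGISTRY_USER, and CI_REGISTRY_PASSWORD are also predefined CI variables):

```yaml
build-image:
  stage: build
  script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
    - docker build -t "$CI_REGISTRY_IMAGE:latest" .
    - docker push "$CI_REGISTRY_IMAGE:latest"
```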

How to use elm reactor to access files via http request?

elm 0.19
$ mkdir myprj; cd myprj; elm init; elm install elm/http
then create src/test.elm and src/test.txt:
$ tree
.
├── elm.json
└── src
    ├── test.elm
    └── test.txt
$ elm reactor
then navigate to:
http://localhost:8000/src/test.elm
so the browser window shows:
This is a headless program, meaning there is nothing to show here.
I started the program anyway though, and you can access it as `app` in the developer console.
but the browser console shows:
Failed to load resource: the server responded with a status of 404 (Not Found) test.text:1
Why can't elm reactor locate test.txt?
test.txt:
hi
test.elm:
import Http

init () =
    ( "", Http.send Resp <| Http.getString "test.text" )

type Msg
    = Resp (Result Http.Error String)

update msg model =
    case msg of
        Resp result ->
            case result of
                Ok body ->
                    ( Debug.log "ok" body, Cmd.none )

                Err _ ->
                    ( "err", Cmd.none )

subscriptions model =
    Sub.none

main =
    Platform.worker
        { init = init
        , update = update
        , subscriptions = subscriptions
        }
Solved
In test.elm, the URL "test.txt" was misspelled as "test.text".
Your question mixes two file extensions: you said you created src/test.txt, but you are getting a 404 because the code requests a .text extension.
Try going to http://localhost:8000/src/test.txt
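Concretely, the only change needed in test.elm is the extension in the request URL:

```elm
init () =
    ( "", Http.send Resp <| Http.getString "test.txt" )  -- was "test.text"
```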

Cannot have file provisioner working with Terraform on DigitalOcean

I am trying to use Terraform to create a DigitalOcean droplet on which Consul is installed.
I'm using the following .tf file, but it hangs and does not copy the Consul .zip file onto the droplet.
I get the following error message after a couple of minutes:
ssh: handshake failed: ssh: unable to authenticate, attempted methods
[none publickey], no supported methods remain
The droplets are created correctly, though. I can log in on the command line with the key I specified (thus not specifying a password). I'm guessing the connection block might be faulty, but I'm not sure what I'm missing.
Any ideas?
variable "do_token" {}

# Configure the DigitalOcean provider
provider "digitalocean" {
  token = "${var.do_token}"
}

# Create nodes
resource "digitalocean_droplet" "consul" {
  count    = "1"
  image    = "ubuntu-14-04-x64"
  name     = "consul-${count.index+1}"
  region   = "lon1"
  size     = "1gb"
  ssh_keys = ["7b:51:d3:e3:ae:6e:c6:e2:61:2d:40:56:17:54:fc:e3"]

  connection {
    type  = "ssh"
    user  = "root"
    agent = true
  }

  provisioner "file" {
    source      = "consul_0.7.1_linux_amd64.zip"
    destination = "/tmp/consul_0.7.1_linux_amd64.zip"
  }

  provisioner "remote-exec" {
    inline = [
      "sudo unzip -d /usr/local/bin /tmp/consul_0.7.1_linux_amd64.zip"
    ]
  }
}
Terraform requires that you specify the private SSH key to use for the connection with private_key. You can create a new variable containing the path to your private key and read it with Terraform's file interpolation function:
connection {
  type        = "ssh"
  user        = "root"
  agent       = true
  private_key = "${file("${var.private_key_path}")}"
}
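For completeness, the referenced variable could be declared like this (the name private_key_path is the one assumed above; the actual path is supplied at apply time):

```hcl
variable "private_key_path" {
  description = "Path to the private SSH key used by the provisioners"
}
```

Then pass the path on the command line, e.g. terraform apply -var "private_key_path=/home/you/.ssh/id_rsa", or set it in a terraform.tfvars file.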
You face this issue because your SSH key is protected by a passphrase. To solve it, generate a key without a passphrase.

Redmine.pm does not work with Authen::Simple::LDAP

I'm using the Bitnami Redmine Stack on a Windows Server. I have configured Redmine to use LDAP authentication, and it works. Now I'd like to have SVN authentication via Redmine and LDAP. I can log in with a Redmine account but not with LDAP. In the Apache error.log there was an error like this:
[Authen::Simple::LDAP] Failed to bind with dn 'REDMINE'. Reason: 'Bad file descriptor'
'REDMINE' is the user account used for the LDAP bind.
It seems to be a problem in Redmine.pm, which is written in Perl, but I don't know much about Perl.
I found the part of the Perl code which, I think, causes the error:

my $sthldap = $dbh->prepare(
    "SELECT host,port,tls,account,account_password,base_dn,attr_login from auth_sources WHERE id = ?;"
);
$sthldap->execute($auth_source_id);
while (my @rowldap = $sthldap->fetchrow_array) {
    my $bind_as = $rowldap[3] ? $rowldap[3] : "";
    my $bind_pw = $rowldap[4] ? $rowldap[4] : "";
    if ($bind_as =~ m/\$login/) {
        # replace $login with $redmine_user and use $redmine_pass
        $bind_as =~ s/\$login/$redmine_user/g;
        $bind_pw = $redmine_pass;
    }
    my $ldap = Authen::Simple::LDAP->new(
        host   => ($rowldap[2] eq "1" || $rowldap[2] eq "t") ? "ldaps://$rowldap[0]:$rowldap[1]" : "ldap://$rowldap[0]",
        port   => $rowldap[1],
        basedn => $rowldap[5],
        binddn => $bind_as,
        bindpw => $bind_pw,
        filter => "(".$rowldap[6]."=%s)"
    );
    my $method = $r->method;
    $ret = 1 if ($ldap->authenticate($redmine_user, $redmine_pass) && (($access_mode eq "R" && $permissions =~ /:browse_repository/) || $permissions =~ /:commit_access/));
}
$sthldap->finish();
undef $sthldap;
Does anyone have a fix for this problem? Or maybe a working alternative to the standard Redmine.pm?
The problem was that IO::Socket::IP and an up-to-date perl-ldap (Net::LDAP) were not installed, and Strawberry Perl does not include them. After installing them with the following commands in the cpan shell, it worked fine!
> cpan
cpan> install IO::Socket::IP
cpan> install Net::LDAP