TortoiseSVN Can't Authenticate - apache

After my previous problem (TortoiseSVN Can't Connect) was resolved, I ran into a new problem.
On the Linux server hosting my svn repository, the repository's directory contains a conf/svnserve.conf file. In this file, I have the option:
anon-access = none | read | write
Initially, this line was commented out and the default value must have been read.
Of course, I want to set anon-access = none, and I want auth-access = write (which is the default).
But with anon-access = none set, when I try to browse with the TortoiseSVN Repository Browser
using the URL svn://host:port/repositoryname, I get the error:
Unable to connect to a repository at URL
'svn://host:port/repositoryname' No access allowed to this repository
I'd like to authenticate successfully without SSH if possible, because I gather SSH has more moving parts and might be a little slower.
The server is CloudLinux Server release 5.8
The svn server information follows. I have only tried the svn protocol so far.
svn, version 1.6.17 (r1128011) compiled Jul 26 2012, 03:59:19
Copyright (C) 2000-2009 CollabNet. Subversion is open source software,
see http://subversion.apache.org/ This product includes software
developed by CollabNet (http://www.Collab.Net/).
The following repository access (RA) modules are available:
ra_neon : Module for accessing a repository via WebDAV protocol using Neon.
handles 'http' scheme
ra_svn : Module for accessing a repository using the svn network protocol.
with Cyrus SASL authentication
handles 'svn' scheme
ra_local : Module for accessing a repository on local disk.
handles 'file' scheme
ra_serf : Module for accessing a repository via WebDAV protocol using serf.
handles 'http' scheme
handles 'https' scheme
I hope this is a good question, because this is essentially the out-of-the-box behavior when connecting to svn from Windows, which is probably pretty common when someone adds svn to a shared hosting account.
Thank you!

Set these lines in your svnserve.conf file:
19 anon-access = none
20 auth-access = write
[...]
27 password-db = passwd
[...]
39 realm = Name-of-your-repository
46 force-username-case = lower
The line numbers are approximate.
The realm can be any string, but it's conventional to set it to the name of your repository. The password-db setting names the file that lists who is authorized to use the repository; by default, that line is commented out.
Next, edit the passwd file in the same directory. The format is very simple:
<userName> = <password>
There are two commented-out entries that show you how it's done.
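For example, after adding a user, the passwd file might look like this (the default file ships with harry and sally commented out; yourUser/yourPassword are placeholders):
[users]
# harry = harryssecret
# sally = sallyssecret
yourUser = yourPassword
svnserve rereads svnserve.conf and passwd on each connection, so no restart should be needed; just reconnect from TortoiseSVN and supply your credentials.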

Related

FreeRADIUS authentication requests not being logged to the specified file

I'm setting up a new FreeRADIUS v3.0.20 system (Ubuntu 20.04) and having a problem getting auth requests to go to a separate file. I was successful doing it on my older (v2.2.8) system but am apparently missing something here. I checked the documentation, ran both systems in debug mode, but no luck.
I added the "requests..." line to the log section of /etc/freeradius/3.0/radiusd.conf
log {
destination = files
file = ${logdir}/radius.log
requests = ${logdir}/radiusd-%{%{Virtual-Server}:-DEFAULT}-%Y%m%d.log
}
Requests are being logged to /var/log/freeradius/radius.log:
Wed Sep 22 16:10:11 2021 : Auth: (0) Login OK: [user1] (from client Switches port 0 cli 8.1.1.1)
but I want them in a file similar to "/var/log/freeradius/radiusd-DEFAULT-20210922.log"
Is there something I'm missing in the sites-enabled/default file, or something else dumb I've missed? Thanks!

How do we send canvas image data as an attachment to a server in Pharo?

How do we send or upload a data file to a server in Pharo? I saw an example of sending a file from a directory on the machine.
It works fine.
ZnClient new
    url: MyUrl;
    uploadEntityFrom: FileLocator home / 'path/to/the/file';
    put
In my case I don't want to send/upload a file that was downloaded to the machine; instead, I want to send/upload a file hosted somewhere else, or data I retrieved over the network, as an attachment to another server.
How can we do that?
Based on your previous questions, I presume you are using Linux. The issue here is not within Smalltalk/Pharo, but in the network mapping.
FTP
If you want to use FTP, don't forget that it sends the password in plain text; set it up in a way that lets you mount it. There are probably plenty of ways to do this, but you can try curlftpfs. You need the fuse kernel module for that, so make sure it is loaded. If it is not loaded, you can do so via modprobe fuse.
The usage would be:
curlftpfs ftp.yoursite.net /mnt/ftp/ -o user=username:password,allow_other
where you fill in username/password. The option allow_other allows other users on the system to use your mount.
(For more details you can see the Arch wiki and its curlftpfs page.)
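When you are done, the standard FUSE unmount applies:
fusermount -u /mnt/ftp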
WebDAV
For WebDAV I would use the same approach, this time using davfs2.
You would mount it manually via the mount command:
mount -t davfs https://yoursite.net:<port>/path /mnt/webdav
There are two reasonable ways to set it up: systemd or fstab. The information below is taken from the davfs2 Arch wiki:
For systemd:
/etc/systemd/system/mnt-webdav-service.mount
[Unit]
Description=Mount WebDAV Service
After=network-online.target
Wants=network-online.target
[Mount]
What=http(s)://address:<port>/path
Where=/mnt/webdav/service
Options=uid=1000,file_mode=0664,dir_mode=2775,grpid
Type=davfs
TimeoutSec=15
[Install]
WantedBy=multi-user.target
You can create a systemd automount unit to set a timeout:
/etc/systemd/system/mnt-webdav-service.automount
[Unit]
Description=Mount WebDAV Service
After=network-online.target
Wants=network-online.target
[Automount]
Where=/mnt/webdav/service
TimeoutIdleSec=300
[Install]
WantedBy=remote-fs.target
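Once the unit files exist, activation would look roughly like this (a sketch; note that systemd requires the unit file name to match the escaped Where= path):
# systemctl daemon-reload
# systemctl enable --now mnt-webdav-service.automount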
The fstab way is easy if you have edited fstab before (it behaves the same as any other fstab entry):
/etc/fstab
https://webdav.example/path /mnt/webdav davfs rw,user,uid=username,noauto 0 0
For WebDAV you can even store the credentials securely:
Create a secrets file to store credentials for a WebDAV-service using ~/.davfs2/secrets for user, and /etc/davfs2/secrets for root:
/etc/davfs2/secrets
https://webdav.example/path davusername davpassword
Make sure the secrets file has the correct permissions; for root mounting:
# chmod 600 /etc/davfs2/secrets
# chown root:root /etc/davfs2/secrets
And for user mounting:
$ chmod 600 ~/.davfs2/secrets
Back to your Pharo/Smalltalk code:
I presume you read the above and have either /mnt/ftp or /mnt/webdav mounted.
For FTP, for example, your code would simply take the file from the mounted directory:
ZnClient new
    url: MyUrl;
    uploadEntityFrom: '/mnt/ftp/your_file_to_upload' asFileReference;
    put
Edit, based on the comments:
The issue is that the configuration of the ZnClient lives in the Pharo image itself, and the JSON file is also generated there.
One quick and dirty solution would be to run the above-mentioned mount from a shell command:
With ftp for example:
| commandOutput |
commandOutput := (PipeableOSProcess command: 'curlftpfs ftp.yoursite.net /mnt/ftp/ -o user=username:password,allow_other') output.
Transcript show: commandOutput.
The other approach is more sensible: use Pharo's FTP or WebDAV support via FileSystemNetwork.
To load FTP support only:
Gofer it
    smalltalkhubUser: 'UdoSchneider' project: 'FileSystemNetwork';
    configuration;
    load.
#ConfigurationOfFileSystemNetwork asClass project stableVersion load: 'FTP'
To load WebDAV support only:
Gofer it
    smalltalkhubUser: 'UdoSchneider' project: 'FileSystemNetwork';
    configuration;
    load.
#ConfigurationOfFileSystemNetwork asClass project stableVersion load: 'Webdav'
To get everything, including tests:
Gofer it
    smalltalkhubUser: 'UdoSchneider' project: 'FileSystemNetwork';
    configuration;
    loadStable.
With that you should be able to get a file, for example over FTP:
| ftpConnection wDir file |
"Open a connection"
ftpConnection := FileSystem ftp: 'ftp://ftp.sh.cvut.cz/'.
"Get the working directory"
wDir := ftpConnection workingDirectory.
"Resolve a file on the FTP filesystem"
file := wDir / 'Arch/lastsync'.
"Close the connection - always do this!"
ftpConnection close.
Then your upload via FTP would look like this:
| ftpConnection wDir file |
"Open a connection"
ftpConnection := FileSystem ftp: 'ftp://your_ftp'.
"Get the working directory"
wDir := ftpConnection workingDirectory.
"Resolve the file on the FTP filesystem (path is a placeholder)"
file := wDir / '<your_file_path>'.
ZnClient new
    url: MyUrl;
    uploadEntityFrom: file;
    put.
"Close the connection - always do this!"
ftpConnection close.
The WebDAV case would be similar.
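For illustration, a minimal WebDAV upload sketch under the same assumptions (FileSystemNetwork also provides a webdav: constructor; the URL and file path are placeholders):
| webdavConnection file |
"Open a WebDAV-backed filesystem"
webdavConnection := FileSystem webdav: 'http://yoursite.net:<port>/path'.
"Resolve the file on the WebDAV filesystem"
file := webdavConnection workingDirectory / '<your_file_path>'.
ZnClient new
    url: MyUrl;
    uploadEntityFrom: file;
    put.
"Close the connection - always do this!"
webdavConnection close.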

mbsync authentication failed

I was able to configure mbsync and mu4e in order to use my gmail account (so far everything works fine). I am now in the process of using mu4e-context to control multiple accounts.
I cannot retrieve emails from my openmailbox account; instead I receive this error:
Reading configuration file .mbsyncrc
Channel ombx
Opening master ombx-remote...
Resolving imap.ombx.io... ok
Connecting to imap.ombx.io (*.*.10*.16*:*9*)...
Opening slave ombx-local...
Connection is now encrypted
Logging in...
IMAP command 'LOGIN <user> <pass>' returned an error: NO [AUTHENTICATIONFAILED] Authentication failed.
In other posts I've seen people suggesting AuthMechs LOGIN or PLAIN, but mbsync doesn't recognize the command. Here is my .mbsyncrc file:
IMAPAccount openmailbox
Host imap.ombx.io
User user@openmailbox.org
UseIMAPS yes
# AuthMechs LOGIN
RequireSSl yes
PassCmd "echo ${PASSWORD:-$(gpg2 --no-tty -qd ~/.authinfo.gpg | sed -n 's,^machine imap.ombx.io .*password \\([^ ]*\\).*,\\1,p')}"
IMAPStore ombx-remote
Account openmailbox
MaildirStore ombx-local
Path ~/Mail/user@openmailbox.org/
Inbox ~/Mail/user@openmailbox.org/Inbox/
Channel ombx
Master :ombx-remote:
Slave :ombx-local:
# Exclude everything under the internal [Gmail] folder, except the interesting folders
Patterns *
Create Slave
Expunge Both
Sync All
SyncState *
I am using Linux Mint and my isync is version 1.1.2
Thanks in advance for any help
EDIT: I have run a debug option and upgraded isync to version 1.2.1.
This is what the debug returned:
Reading configuration file .mbsyncrc
Channel ombx
Opening master store ombx-remote...
Resolving imap.ombx.io... ok
Connecting to imap.ombx.io (*.*.10*.16*:*9*)...
Opening slave store ombx-local...
pattern '*' (effective '*'): Path, no INBOX
got mailbox list from slave:
Connection is now encrypted
* OK [CAPABILITY IMAP4rev1 LITERAL+ SASL-IR LOGIN-REFERRALS ID ENABLE IDLE AUTH=PLAIN AUTH=LOGIN] Openmailbox is ready to
handle your requests.
Logging in...
Authenticating with SASL mechanism PLAIN...
>>> 1 AUTHENTICATE PLAIN <authdata>
1 NO [AUTHENTICATIONFAILED] Authentication failed.
IMAP command 'AUTHENTICATE PLAIN <authdata>' returned an error: NO [AUTHENTICATIONFAILED] Authentication failed.
My .mbsyncrc file now contains these options instead:
SSLType IMAPS
SSLVersions TLSv1.2
AuthMechs PLAIN
In the end, the solution was to use the correct password. Since openmailbox uses an application password for third-party e-mail clients, I was using the wrong (original) password instead of the application password.
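For reference, the working remote section ended up looking roughly like this (host, user, and PassCmd as in the original file; the command must yield the application password, not the main account password):
IMAPAccount openmailbox
Host imap.ombx.io
User user@openmailbox.org
SSLType IMAPS
SSLVersions TLSv1.2
AuthMechs PLAIN
PassCmd "echo ${PASSWORD:-$(gpg2 --no-tty -qd ~/.authinfo.gpg | sed -n 's,^machine imap.ombx.io .*password \\([^ ]*\\).*,\\1,p')}"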

Dockerized DCTM 7.3 and Dockerized DCTM REST 7.3 not able to retrieve global registry or its documents

My setup consists of
Documentum Content Server 7.3 (dctm-cs) running in a docker container (from EMC)
Documentum REST Services 7.3 (dctm-rest) running in a docker container (from EMC)
I am definitely able to get information from within dctm by running queries against it with iapi, for example:
API> ?,c,select user_name from dm_user enable (return_top 5)
user_name
------------------------------
docu
ubuntudb
dm_superusers
dm_superusers_dynamic
dm_browse_all
(5 rows affected)
I am also able to run $ curl http://localhost:8080/dctm-rest/repositories.json from both the dctm-rest container and its host, and get the results:
{"id":"http://localhost:8080/dctm-rest/repositories","title":"Repositories","author":[{"name":"EMC Documentum"}],"updated":"2017-08-16T21:42:44.177+00:00","page":1,"items-per-page":1000,"total":1,"links":[{"rel":"self","href":"http://localhost:8080/dctm-rest/repositories.json"}],"entries":[{"id":"http://localhost:8080/dctm-rest/repositories/ubuntudb","title":"ubuntudb","summary":"ubuntudb","updated":"2017-08-16T21:42:44.178+00:00","published":"2017-08-16T21:42:44.178+00:00","links":[{"rel":"edit","href":"http://localhost:8080/dctm-rest/repositories/ubuntudb.json"}],"content":{"type":"application/json","src":"http://localhost:8080/dctm-rest/repositories/ubuntudb.json"}}]}
Attempting to $ curl http://localhost:8080/dctm-rest/repositories/ubuntudb.json, however, hangs indefinitely.
I have attempted to provide the default username and password via basic HTTP authentication, also with the same results.
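For reference, the basic-auth attempt was of this form (credentials are placeholders); it hangs the same way:
$ curl -u username:password http://localhost:8080/dctm-rest/repositories/ubuntudb.json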
The contents of the dfc.properties file in dctm-cs:
dfc.data.dir=/opt/dctm
dfc.tokenstorage.dir=/opt/dctm/apptoken
dfc.tokenstorage.enable=false
dfc.docbroker.host[0]=ubuntustateless
dfc.docbroker.port[0]=1489
dfc.crypto.repository=ubuntudb
dfc.session.secure_connect_default=try_secure_first
dfc.globalregistry.repository=ubuntudb
dfc.globalregistry.username=dm_bof_registry
dfc.globalregistry.password=AAAAEL9wp8c6k3K2UTQJwTYO5kMnE3rDrHJVDL+LijAg+zLk
The contents of the dfc.properties file in dctm-rest:
dfc.docbroker.host[0]=172.18.0.1
dfc.docbroker.port[0]=1489
#Add the global registry repository name to the following key.
dfc.globalregistry.repository=ubuntudb
#Add the username of the global registry user to the following key.
dfc.globalregistry.username=dmadmin
#Add an encrypted password value for the following key.
dfc.globalregistry.password=password
dfc.exception.include_id=false
dfc.exception.include_decoration=false
I have attempted to change the value of dfc.globalregistry.username to be the same as in dctm-cs, to no avail; the request hangs the same way.
I have also attempted to use both encrypted and plain-text values for dfc.globalregistry.password, in both dctm-cs and dctm-rest, also with no luck.

Only show repos in gitweb that user has access to via Gitolite

I have gitolite set up and working with SSH key-based auth. I can control access to repos via the gitolite-admin.git repo and the conf file. All of this works great over SSH, but I would like to use GitWeb as a quick way to view the repos.
GitWeb is working great now but shows all repositories via the web interface. So my goal here is to:
Authenticate users in apache2 via PAM. I already have the Ubuntu server authenticating against AD and all the users are available, so this should not be an issue.
Use the logged-in user name to check gitolite permissions.
Display the appropriate repos in the web interface.
Does anyone have a starting point for this? The Apache part shouldn't be difficult, and I'll set it to auth all of the /gitweb/ URL. I don't know how to pass that username around and authorize it against gitolite. Any ideas?
Thanks,
Nathan
Yes, it is possible, but you need to complete the gitweb config scripts in order to call gitolite.
The key is in the gitweb_config.perl: if that file exists, gitweb will include and call it.
See my gitweb/gitweb_config.perl file:
our $home_link_str = "ITSVC projects";
our $site_name = "ITSVC Gitweb";
use lib (".");
require "gitweb.conf.pl";
In gitweb/gitweb.conf.pl (a custom script), I define export_auth_hook, the official callback function called by gitweb; that function will call gitolite.
use Gitolite::Common;
use Gitolite::Conf::Load;
$ENV{GL_USER} = $cgi->remote_user || "gitweb";
$export_auth_hook = sub {
my $repo = shift;
my $user = $ENV{GL_USER};
# gitweb passes us the full repo path; so we strip the beginning
# and the end, to get the repo name as it is specified in gitolite conf
return unless $repo =~ s/^\Q$projectroot\E\/?(.+)\.git$/$1/;
# check for (at least) "R" permission
my $ret = &access( $repo, $user, 'R', 'any' );
return ( $ret !~ /DENIED/ );
};
From the comments:
GL_USER is set because of the line:
$ENV{GL_USER} = $cgi->remote_user || "gitweb";
$cgi->remote_user picks up the REMOTE_USER environment variable set by whichever Apache auth module completed the authentication (as in this Apache configuration file).
You can print it with a 'die' line.
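For illustration, a minimal Apache sketch that populates REMOTE_USER for gitweb (a sketch only: the location, auth provider, and password file path are assumptions; any module that completes authentication and sets REMOTE_USER will do, including a PAM-backed one):
<Location /gitweb>
    AuthType Basic
    AuthName "Gitweb"
    # placeholder file-based auth; swap in your PAM/AD-backed module
    AuthUserFile /etc/apache2/gitweb.passwd
    Require valid-user
</Location>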
"Could not find Gitolite/Rc.pm" means the INC variable used by perl doesn't contain $ENV{GL_LIBDIR}; (set to ~/gitolite/lib or <any_place_where_gitolite_was_installed>/lib).
That is why there is a line in the same gitweb/gitweb.conf.pl file which adds that to INC:
unshift #INC, $ENV{GL_LIBDIR};
use lib $ENV{GL_LIBDIR};
use Gitolite::Rc;
Edit from Nat45928: in my case I needed to insert my home path into all the '#H#' entries. That solved all of my issues right away.