Configuring GSSAPI and Cyrus SASL - sasl

I've been trying to configure GSSAPI and Cyrus SASL, following this guide.
It seems pretty straightforward, except for the very first step, "1. Compile the Cyrus-SASL distribution with the GSSAPI plugin for your favorite GSS-API mechanism."
I can't figure this out, and I have nowhere else to go. I've looked through the output of "./configure --help" to try to understand how to include certain plugins, but it isn't obvious whether the options I pass to configure are even valid. For example, I can type "./configure --enable-asdf" and it goes through without a hiccup. So I have no idea whether I'm doing things properly, or even close to it, when trying to set this up.
Things I've tried: "./configure --enable-gssapi" (among many slight and not-so-slight variations on this command). After configuring and running "make" and "make install", I've tried "pluginviewer | grep -i gssapi" to see whether the GSSAPI mechanism is installed. Sure enough, it is not.
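For what it's worth, here is the general shape that has worked for others, assuming MIT Kerberos development headers are installed; the --with-gss_impl flag comes from the cyrus-sasl 2.1 configure script, so double-check it against your version's "./configure --help" output:
# Assumes MIT Kerberos dev headers (krb5-devel / libkrb5-dev) are installed.
./configure --enable-gssapi --with-gss_impl=mit
# configure does not treat unknown --enable-* flags as errors (hence
# --enable-asdf "succeeding"), so confirm GSSAPI was actually detected:
grep -i gssapi config.log | head
make && sudo make install
# If built, the plugin lands on disk (the path varies by platform):
ls /usr/lib/sasl2 /usr/local/lib/sasl2 2>/dev/null | grep -i gssapi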

Related

Perl6: rakudobrew cannot build moar

I'd like to upgrade to the newest version of Perl 6, so I ran:
rakudobrew build moar
Update git reference: rakudo
Cloning into 'rakudo'...
fatal: unable to connect to github.com:
github.com[0: 140.82.114.4]: errno=Connection timed out
Failed running git clone git://github.com/rakudo/rakudo.git rakudo at /home/con/.rakudobrew/bin/rakudobrew line 57.
main::run("git clone git://github.com/rakudo/rakudo.git rakudo") called at /home/con/.rakudobrew/bin/rakudobrew line 397
main::update_git_reference("rakudo") called at /home/con/.rakudobrew/bin/rakudobrew line 368
main::build_impl("moar", undef, "") called at /home/con/.rakudobrew/bin/rakudobrew line 115
This is just a simple connection failure, but how do I fix it?
Your connection problem is not really anything to do with any P6 related software, or in fact any software you're using. It is, as you say, "just a simple connection failure". And most such failures are transient and "fix themselves". As JJ notes, in such scenarios you just wait and then things start working again.
So by the time you read this it'll probably be working for you again without you having fixed anything. But I'm writing an answer anyway with these sections:
Consider not using rakudobrew
Connection problems that "fix themselves"
Connection problems you investigate or fix yourself
Getting around single points of failure
Consider not using rakudobrew
The main purpose of rakudobrew is to support installing many versions of Rakudo simultaneously, and the main audience for the tool is folk hacking on the Rakudo compiler, not those merely using it.
If you're just a regular user, not someone developing the Rakudo compiler, and/or you don't need multiple versions of Rakudo (with complete source code) installed simultaneously, then consider just downloading and installing Rakudo files directly, e.g. via rakudo.org/files, rather than via rakudobrew.
Connection problems that "fix themselves"
rakudobrew failed because its git clone ... command timed out while connecting to the github.com server.
A server timing out when doing something that usually works using a connection that usually works is likely a transient problem, aka a "please try later" problem.
Transient problems typically "fix themselves" a few seconds, minutes or hours later.
If there's still a problem when you try again, and you want to spend time finding out what's going on, then look for an official status page for that server.
Here are two status pages I know of for github.com:
https://www.githubstatus.com/
https://twitter.com/githubstatus?lang=en-gb
And for unofficial scuttlebutt I suggest reading the twitter feed.
For me, right now, github.com is working fine and the status page says all systems are go.
So it should now be working for you too.
If it's not, then you can wait longer, or investigate. If you want to investigate, start by looking at the status pages above.
Connection problems you investigate or fix yourself
If github claims it's working fine then there's presumably a problem with your local internet "on-ramp" (your system or your internet service provider's) or somewhere further afield between your on-ramp and the server you're failing to connect to. (You can only know approximately where the server is based on which region of the world administers the IP address the server is associated with at any given moment.)
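A few quick checks you can run yourself (standard tools, assuming a Unix-like shell; the repository URL is the one from the error above):
ping -c 3 github.com          # basic DNS resolution and reachability
git ls-remote https://github.com/rakudo/rakudo.git HEAD   # can git reach the server over HTTPS?
One thing worth knowing: the failing command used a git:// URL, which runs over TCP port 9418; firewalls often block that even when HTTPS works, so the HTTPS check above is a useful cross-check.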
The next place to look would be something like the Internet Traffic Report, which indicates traffic jams and the like across the planet. (Ignore the visual display, which is broken on some browsers, and click the links in the table to drill down.)
If it's all green between you and the region that administers the IP address of the server you're failing to connect to, then the next place to turn would be your system's administrator and/or ISP.
Failing that, perhaps you can ask a question at a sister Stack Exchange site like serverfault.com or superuser.com.
Getting around single points of failure
Perhaps you were thinking there might be some system redundancy and/or you're interested in that aspect.
P5's ecosystem and its tools are generally mature and limit single points of failure (SPOFs). This is unlike the ecosystems and tools of most other languages out there; so if you've gotten used to the remarkable reliability/availability of CPAN, and by extension perlbrew, due to their avoidance of SPOFs, well, you've been spoiled by P5.
The P6 ecosystem/tool combinations are evolving in the P5 tradition.
For example, the zef package manager automatically connects to CPAN alongside github, and is built to be able to connect to other repos. The ecosystem is partway to taking advantage of this zef capability, in that many modules are redundantly hosted on both CPAN and github.
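For a regular user the practical upshot is simply that a plain install command can be satisfied from more than one source; the module name here is just an example:
zef install JSON::Fast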
rakudobrew ignores CPAN and assumes the use of git repos. It is impressively configurable via its Variables.pm file, which includes a %git_repos variable that could be re-targeted at an alternative git host like GitLab. But no one has, to my knowledge, arranged to redundantly copy and update the relevant Rakudo files to another git host, so this SPOF-avoidance ability apparently inherent in rakudobrew's code is, afaik, moot for now.

chroot with SSH and SFTP

I'm stuck on something that seems quite complicated, but I'm pretty sure I'm not the first to face this problem; still, I can't seem to find anyone describing the same problem on any forum.
As the title says, I want to make a chroot for users that works with both SSH and SFTP. I'm currently stuck with one or the other, and that's not OK with me.
Following tutorials, I modified the sshd_config file and added this line as suggested:
ForceCommand internal-sftp
That allows me to connect from a Linux terminal, but it's a bit tricky for Windows users on PuTTY, even if it seems you can use it with psftp. And you can't use all the commands you put inside the jail environment.
Has anyone already figured this one out?
As stated before, my chroot is working, so it's not really a configuration issue.
Finally I found a solution reading another tutorial.
If anyone else is ever troubled by this: most tutorials just leave the default Subsystem line in sshd_config as it is.
To resolve my issue, I changed it to this:
#Subsystem sftp /usr/lib/openssh/sftp-server
Subsystem sftp internal-sftp
That did the trick, and I can now access my server over both SSH and SFTP with the chrooted accounts.
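For anyone assembling this from scratch, the relevant sshd_config pieces end up looking something like the sketch below; the group name and chroot path are just placeholders. Note the absence of ForceCommand, which is what keeps a shell (SSH) available alongside SFTP, and remember that a ChrootDirectory target must be root-owned and not writable by the user:
Subsystem sftp internal-sftp

Match Group sftpusers
    ChrootDirectory /home/jail/%u
    AllowTcpForwarding no
    X11Forwarding no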
It's always just after you post on a forum that you find a solution.
Have a good day everyone.

can a logrotate config point to a directory

Primary question: Is it a valid option in a logrotate config to have the targeted log be a directory?
I have seen several examples using the wildcard notation of /var/log/example/*.log, but I am curious if the option of /var/log/example will give a similar result.
I checked the man pages, but only found examples with the wildcard syntax. This led me to believe that was the only way. However, there is a sentence in the doc which reads, "The last section defines the parameters for all of the files in /var/log/news", in reference to an example without a wildcard, which made me question that.
Background of question:
In dealing with the logrotate recipe (from the apache2 community cookbook), which is intended to set up log rotation for the httpd install, the created config points to a directory. logrotate -d shows that an operation is being performed (I think), but none of the files contained in the directory are rotated on logrotate -f. Since I am obviously unfamiliar with logrotate, I was hoping someone could enlighten me (before I log a ticket / pull request).
Do note that the example they use shows /var/log/news/*, so I'm pretty sure /var/log/news/ will not do anything for you. As I've noted in another thread, you can run logrotate with the '-d' flag and it will not perform any rotations, but will show you what it WOULD do for a particular configuration file.
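In other words, target the files with a glob rather than the directory itself. A minimal stanza looks something like this (the path and rotation policy are only illustrative):
/var/log/example/*.log {
    daily
    rotate 7
    compress
    missingok
    notifempty
}
You can then dry-run it with "logrotate -d /etc/logrotate.d/example" to confirm which files it would rotate.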

Launch webserver with no configuration file

I really like the concept of firing up an HTTP daemon to serve something like the current working directory in the file system without the distraction of configuring modules, directories, permissions etc. (instant gratification for programmers). My example use-cases are:
prototyping some RESTful web services with a new UI framework,
providing a tutorial for users of some UI framework with realistic but minimal end-to-end sample code,
experimenting with making an SVN or Git repository available over HTTP (no lectures about security or alternative protocols, please), or
making my personal files (photos, documents, ...) available temporarily over HTTP while I'm out of town (particularly abroad, where all I'd have is a plugin-less browser at an internet cafe).
Here's one I found from another thread:
python -m SimpleHTTPServer 8000
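As an aside, the module was renamed in Python 3, so the equivalent one-liner there is:
python3 -m http.server 8000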
Is there an equivalent, ideally with Apache httpd? Lighttpd is a candidate too, but once you create prerequisites you lose adopters of the technology you're trying to teach (or learn yourself). The best tutorials are one-liners you can copy and paste to execute, then figure out how they work after seeing them in action.
I'm guessing the answer is no, not directly, BUT you can use a heredoc in place of your httpd.conf file. It would be nicer if the popular binaries took direct command-line arguments.
This runs lighttpd in the foreground on port 8080, serving files from /www. Ctrl-C to cause lighttpd to exit:
printf 'server.document-root="/www" \n server.port=8080' | lighttpd -D -f -
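The same configuration written as an actual heredoc, which is a bit easier to read and extend with more directives:
lighttpd -D -f - <<'EOF'
server.document-root = "/www"
server.port          = 8080
EOF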

exim configuration - accept all mail

I've just set up exim on my Ubuntu computer. At the moment it will only accept email for accounts that exist on that computer, but I would like it to accept all email (just because I'm interested). Unfortunately there seem to be a million exim-related config files, and I'm not having much success finding anything on Google.
Is there an introduction to exim for complete beginners?
Thanks.
There's a mailing list at http://www.exim.org/maillist.html. The problem you will face as an Ubuntu user is that there's always been a slight tension between Debian packagers/users and the main Exim user base because Debian chose to heavily customize their configuration. Their reasons for customizing it are sound, but it results in Debian users showing up on the main mailing list asking questions using terms that aren't recognizable to non-Debian users. Debian runs its own exim-dedicated help list (I don't have the address handy, but it's in the distro docs). Unfortunately this ends up causing you a problem because Ubuntu adopted all these packages from Debian, but doesn't support them in the same way as Debian does, and Debian packagers seem to feel put upon to be asked to support these Ubuntu users.
So, Ubuntu user goes to main Exim list and is told to ask their packager for help. So they go to the Debian lists and ask for help and may or may not be helped.
Now, to answer your original question, there are a ton of ways to do what you ask, and probably the best way for you is going to be specific to the Debian/Ubuntu configurations. However, to get you started, you could add something like this to your routers:
catchall:
  driver = redirect
  domains = +local_domains
  data = youraddress@example.com
If you place that after your general alias/local-delivery routers and before any forced-failure routers, it will redirect all mail to any unhandled local_part at any domain in local_domains to youraddress@example.com.
local_domains is a domain list defined in the standard Exim config file. If you don't have it or an equivalent, you can replace it with a colon-separated list of local domains, like "example.com:example.net:example.foo".
One of the reasons it's hard to get up to speed with Exim is that you can do literally anything with it (literally: someone on the list proved the expansion syntax is Turing-complete a few years ago, IIRC). So, for instance, you could use the above framework to look the domains up from a file, apply regular expressions against the local_parts to catch, save the mail to a file instead of redirecting to an address, or put it in front of the routers and use "unseen" to save copies of all mail; a sketch of that last idea follows. If you really want to administer an Exim install, I strongly recommend reading the documentation from cover to cover; it's really, really good once you get a toehold.
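As a sketch of that "unseen" idea, assuming the standard config layout (the router name and archive address are placeholders, untested):
archive_copy:
  driver = redirect
  unseen
  data = archive@example.com
Placed ahead of your other routers, that delivers a copy of every message to archive@example.com while letting normal delivery continue.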
Good luck!