Is it possible to set up LDAP with external databases? [closed]

Suppose I have an existing LDAP directory and I want to integrate users from one or more existing external databases under a DN called
dn: ou=users,dc=example,dc=com
Is that possible?
EDIT:
Maybe I was a bit too vague:
I have external databases containing users that have to be integrated into LDAP. I want to do this without having to copy them into the LDAP database itself.

I'm not sure what you mean by "integrating users" there. Is what you're trying to do something like this?
ldapsearch -h my.ldap.server -b ou=users,dc=example,dc=com "cn=somebody"
…where my.ldap.server is the LDAP server your applications are talking to, but the data you're seeking lives on some other server under the naming context ou=users,dc=example,dc=com, and you want my.ldap.server to interface with that server and fetch the data, transparently to your apps?
If that's the case, you can use an LDAP proxy that relays requests based on context rules. It can act as the single data source, providing a layer of abstraction between your LDAP clients and LDAP servers that may host different types of data.
Alternatively, you can use a virtual directory server product, which can also act as a single data source. Virtual directory servers usually provide more features, including support for multiple protocols rather than just LDAP, and they can act as bridges to relational databases.
The first solution, an LDAP proxy, is usually quite sufficient if you are only virtualizing LDAP servers.
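If your directory is OpenLDAP, a minimal sketch of the proxy approach is the meta backend in slapd.conf, which mounts one or more remote naming contexts under your own suffix (the hostnames below are hypothetical):

# slapd.conf excerpt: proxy ou=users to remote servers
database  meta
suffix    "ou=users,dc=example,dc=com"
# each uri line declares one remote target; its naming context is mapped onto the suffix
uri       "ldap://db1.internal.example.com/ou=users,dc=example,dc=com"
uri       "ldap://db2.internal.example.com/ou=users,dc=example,dc=com"

If the external databases are relational rather than LDAP, OpenLDAP also ships a back-sql backend that exposes SQL tables as directory entries, though a virtual directory product is usually more flexible for that case.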

Related

Is VirtualHost a good pattern in RabbitMQ? [closed]

I have 100 clients. Each client has a unique username, a password and two channels (users can't connect to channels other than their own). Should I create a VirtualHost for each user?
How do I write proper user permissions for the situation below?
my_user can connect only to a vhost called user_vhost using its username and password
my_user can consume only from the user_channel channel
my_user can publish only to the user_channel channel
my_user can connect remotely
Thank you!
A virtual host in RabbitMQ is more like a logical container: a user connected to a particular virtual host cannot access any resource (exchange, queue, ...) from another virtual host. I always think of it as a kind of administrative domain.
Based on what you have explained, I think having a virtual host per user is a good way to keep things simple and clean. Also, this way you do not need to come up with complicated permission rules; just grant permissions per virtual host.
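A hedged sketch of the per-user-vhost setup with rabbitmqctl (names and password are hypothetical; note that RabbitMQ permissions apply to resources such as exchanges and queues, not to channels):

# create an isolated vhost and a user confined to it
rabbitmqctl add_vhost user_vhost
rabbitmqctl add_user my_user my_password
# full rights, but only inside user_vhost; the vhost boundary provides the isolation
rabbitmqctl set_permissions -p user_vhost my_user ".*" ".*" ".*"

As for remote access: in recent RabbitMQ versions only the built-in guest account is restricted to localhost, so any other user can connect remotely by default.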

I'm thinking of blocking access to every part of my site other than these (SSH/HTTP). Is this a good idea? [closed]

I think this should be standard for everybody to do anyway, but maybe I'm missing something.
I want to block access to my site through every port/method/protocol except a select few methods:
This includes blocking use of the IP address rather than the domain name. So visits to 123.55.123.66 and ssh://123.55.123.66 will always fail.
Also, blocking all FTP access
Only these will be allowed:
(1) http://domain.com
(2) https://domain.com
(3) ssh://ssh-access.domain.com
SSH would only be available at this subdomain, so people can't reach SSH via the IP or the same domain that is publicly available.
Also, http://ssh-access.domain.com would fail.
No access to FTP, Telnet, or anything else.
Is this a good idea?
Because I can't even think of all the different ports/protocols available, I think it's best to block everything except those listed above (rather than individually blocking FTP, SSH, etc.).
Also, if anyone has any pointers as to how I would code this, that would be great. I'm guessing it's best to do it in Apache (or Ubuntu).
You cannot "visit" ssh://123.55.123.66 in the proper sense (i.e. with a web browser) and, although some file browsers offer this extension, Apache is not involved in the connection (instead, the SSH daemon is). Moreover, SSH daemon has no notion of "(sub)domain".
That said, you can configure the SSH daemon to listen only on the "remote access" IP address (i.e. bind it to that address).
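In sshd terms that is the ListenAddress directive; a minimal sketch, assuming the ssh-access subdomain resolves to a dedicated address (the address below is hypothetical):

# /etc/ssh/sshd_config: accept SSH only on the "remote access" address
ListenAddress 10.0.0.2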
For the website, you can adapt the appropriate Mod-Security rules to deny access to people/bots trying to reach the site by IP address rather than by hostname.
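For the port-level part, a default-deny firewall is the usual tool; a hedged iptables sketch of "drop everything except SSH/HTTP/HTTPS":

# default-deny inbound; allow loopback, established traffic and the three services
iptables -P INPUT DROP
iptables -A INPUT -i lo -j ACCEPT
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -A INPUT -p tcp --dport 22 -j ACCEPT
iptables -A INPUT -p tcp --dport 80 -j ACCEPT
iptables -A INPUT -p tcp --dport 443 -j ACCEPT

And as a simpler alternative to Mod-Security for the by-IP case, a catch-all first virtual host (Apache 2.2 syntax) can deny any request whose Host header matches no configured ServerName, which is where bare-IP requests land:

# the first-listed vhost is Apache's default for unmatched Host headers
<VirtualHost *:80>
    ServerName catchall.invalid
    <Location />
        Order deny,allow
        Deny from all
    </Location>
</VirtualHost>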

OpenVPN access control [closed]

Using OpenVPN, I can enable 2-way authentication with certificates, private keys and a CA-certificate.
In my understanding, this only provides authentication (the client is who he says he is) but not authorization (access control). OpenVPN simply assumes that a successfully authenticated client is also authorized to connect.
If I now run a second VPN server, using the same CA, will the clients of the first also have access to the second VPN?
If I want to avoid this - clients with keys/certs for the first VPN server should not be able to access the second VPN server (and reverse) - what are my options?
use a different CA for each server (ugly in my opinion)
use an access control list based on the common name (CN) (not so practical)
use firewall / iptables (not so practical)
Am I missing a way to somehow limit access of a certain client to a certain server?
Citing Jan Just Keijser from the OpenVPN forum:
openvpn provides authentication, not access control (authorization), nor should it, in my opinion. The options you mention are the only options you have, unless you also want to throw in username+password control.
you could use a sub-CA (intermediate CA); each client cert would be signed by a specific sub-CA; the clients need only the "root" CA to connect to a server, but the servers can allow access based on the sub-CA used for a client.
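One way to act on the sub-CA idea server-side is OpenVPN's tls-verify hook, which runs a script for every certificate in the client's chain and rejects the connection on a non-zero exit; a hedged sketch with a hypothetical sub-CA name:

# /etc/openvpn/server.conf (one added line)
tls-verify /etc/openvpn/check-issuer.sh

# /etc/openvpn/check-issuer.sh
#!/bin/sh
# OpenVPN calls this with two arguments: certificate depth and X.509 subject.
# Depth 1 is the certificate that issued the client cert, i.e. the sub-CA.
depth="$1"
subject="$2"
if [ "$depth" = "1" ]; then
    case "$subject" in
        *CN=vpn1-subca*) exit 0 ;;
        *)               exit 1 ;;
    esac
fi
exit 0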

Using SSH Keys for http user verification [closed]

This is most likely not possible, but I'm just asking this to check... I'm just thinking out loud here...
So, SSH keys are very useful for logging into a server: they increase security while making it easier to manage several servers or other programs with one key, and by unlocking the key when you log in, there is even less need to type a password over and over again. So I was wondering: is there a way to use SSH keys for website user verification? I am not talking about large, public websites, but about small, controlled systems that are used by specific users whose OS/browser can be controlled. Is there a way to integrate this? For this to work I assume the private key would need to be transferred over the web, so let's say we have SSL running to keep that from being insecure. Is such a thing possible? In an ideal situation, I log in to a website, it sees that my private key matches the installed public key, and voilà, I'm in!
It's called a client certificate, and you import it into your browser.
From a technical point of view, SSH keys are a public-key cryptography scheme, and that is exactly what X.509 certificates provide in SSL. So what you need are certificates (a client-side one, if you want to authenticate the client to the server).
And no, private keys are never transferred across the net. They are only used locally, in certain operations during the exchange of the session key.
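On the server side, Apache's mod_ssl can demand such a client certificate; a minimal sketch, with hypothetical file paths:

# require a client certificate signed by your own CA (mod_ssl)
SSLEngine on
SSLCertificateFile    /etc/ssl/server.crt
SSLCertificateKeyFile /etc/ssl/server.key
SSLCACertificateFile  /etc/ssl/client-ca.crt
SSLVerifyClient require
SSLVerifyDepth  1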
A much simpler (and weaker) alternative is plain HTTP Basic authentication in Apache, which prompts for a username/password instead of using keys:

# .htaccess-style Basic authentication
AuthUserFile /home/hafizin/.htpasswd
AuthGroupFile /home/hafizin/.htgroup
AuthName "hafizin page"
AuthType Basic
require group my-users

Does the nginx proxy handle session IDs well? [closed]

For example,
I have an nginx server as a front end and two Apache servers with mod_php.
As you know, PHP has session support, which sets a cookie identifying the session ID while the real data is stored on the server.
When a user has been given this kind of cookie by one Apache server, will his subsequent HTTP requests be forwarded to the same Apache server until the session/cookie expires?
Out of the box, no, the requests will not necessarily be forwarded to the same server, so your application using sessions will be broken.
Go to your favorite search engine and type "nginx affinity" and "nginx sticky" for solutions.
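One common built-in option those searches will turn up is nginx's ip_hash directive, which pins each client IP to one backend; a minimal sketch with hypothetical hostnames:

# nginx.conf: the same client IP always reaches the same Apache backend
upstream php_backends {
    ip_hash;
    server apache1.internal:80;
    server apache2.internal:80;
}
server {
    listen 80;
    location / {
        proxy_pass http://php_backends;
    }
}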
Yes, it can do that if you follow the documentation for using multiple back-end servers:
http://wiki.nginx.org/HttpUpstreamModule
But it is better to consider storing sessions in some sort of shared storage, e.g. Memcached or a database.
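For the shared-storage route, PHP's session handler can point at Memcached directly; a hedged sketch assuming the PECL memcached extension is installed (hostname hypothetical):

; php.ini: keep session data in Memcached so any backend can serve any request
session.save_handler = memcached
session.save_path = "memcache1.internal:11211"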