Synchronize LDAP servers with DSML format

Is it possible to synchronize two LDAP servers with the DSML protocol instead of LDIF?
We have a firewall between our two servers, and the only authorized protocol/payload is XML over HTTP.
We had a look at a DSML gateway, but it does not seem to be well suited to our needs here.

Everything is possible, it's software. DSML is essentially a representation of LDAP in XML, so it is possible to send DSML queries to a server to replay operations that happened on the first one. But I don't know of any server that can log or dump the changes it receives as DSML queries for later replay.
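As a rough illustration of that idea, here is a minimal sketch (in Python, using the requests library) of what replaying a single modify operation as a DSMLv2 batchRequest over HTTP could look like; the gateway URL, credentials, DN and attribute values are all made up for the example:

import requests

# Hypothetical DSMLv2 payload describing one modify operation to replay.
dsml_body = """<?xml version="1.0" encoding="UTF-8"?>
<batchRequest xmlns="urn:oasis:names:tc:DSML:2:0:core">
  <modifyRequest dn="uid=jdoe,ou=people,dc=example,dc=com">
    <modification name="mail" operation="replace">
      <value>jdoe@example.com</value>
    </modification>
  </modifyRequest>
</batchRequest>"""

resp = requests.post(
    "https://dsml-gateway.example.com/dsml",  # hypothetical gateway in front of the target LDAP server
    data=dsml_body.encode("utf-8"),
    headers={"Content-Type": "text/xml"},
    auth=("sync-user", "secret"),             # whatever authentication the gateway expects
)
resp.raise_for_status()                       # the gateway answers with a DSML batchResponse in resp.text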

Can I use Kafka over the Internet?

Is Kafka suitable for use over the Internet?
More precisely, what I want is to expose Kafka topics as a "public interface" so that external consumers (or producers) can connect to them. Is that possible?
I hear there are problems if I want to use the cluster from both internal and external networks, because it is then hard to configure advertised.host.name. Is that true?
And do I have to expose ZooKeeper as well? I think the new consumer/producer APIs no longer need that.
Kafka's wire protocol is TCP-based and works fine over the public internet. In the latest versions of Kafka you can configure multiple listeners for both internal and external traffic. Examples of Kafka over the internet in production include several Kafka-as-a-Service offerings from Heroku, IBM MessageHub, and Confluent Cloud.
You do not need to expose ZooKeeper if the Kafka clients use the new consumer API.
You may also choose to expose a REST proxy, such as the open source Confluent REST Proxy, as a more firewall-friendly interface for clients, since it runs over HTTP(S) and will not be blocked by most corporate or personal firewalls.
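For instance, producing a record through such a REST proxy is just an HTTP POST. A minimal sketch in Python, assuming a Confluent REST Proxy (v2 API) reachable at a hypothetical URL and a hypothetical topic name:

import json
import requests

resp = requests.post(
    "https://kafka-rest.example.com/topics/orders",   # hypothetical proxy endpoint and topic
    data=json.dumps({"records": [{"value": {"order_id": 42, "amount": 9.99}}]}),
    headers={
        "Content-Type": "application/vnd.kafka.json.v2+json",
        "Accept": "application/vnd.kafka.v2+json",
    },
)
resp.raise_for_status()
print(resp.json())   # includes the partition and offset assigned to each record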
I would personally not expose the Kafka servers directly to clients via TCP, for these reasons, to name only a few:
If a bad client opens too many connections, this may affect the stability of the Kafka platform and may affect other clients too.
Too many open files on the Kafka servers: hardware/software settings and OS tuning are needed to limit the impact of uncontrolled clients.
If you need to add a Kafka server to increase scalability, you may need to go through a lot of low-level configuration (firewall, IP visibility, certificates, etc.) on both the client and server side. Other products address these problems using gateways or proxies: Coherence uses Extend proxy clients, TIBCO EMS uses routed destinations, other software (many JMS servers) uses store-and-forward mechanisms, etc.
Maintenance of the Kafka nodes, in case clients are attached directly to the Kafka servers, will also have to consider the needs of the clients and the SLA (service level agreement) that has been defined with them (e.g. 24x7x365).
If you also use Kafka as a back-end service, a multi-layered architecture should be taken into consideration: front-end gateways and back-end services, etc.
Other considerations require understanding what exactly you consider an external (over-the-internet) consumer/producer in your system. Is it a component of your own system that needs to access the Kafka servers? Is it internal or external to your organization?
...
Naturally, all these concerns can also be addressed using a direct TCP connection to the Kafka servers, but I would personally use a different solution:
HTTP proxies
Or at least I would use a dedicated front-end Kafka server (or a couple of servers for HA) per client, forwarding the messages to the main group of Kafka servers.
It is possible to expose Kafka over the internet (in fact, that's how managed Kafka providers such as Aiven and Instaclustr make their money) but you have to ensure that it is adequately secured. At minimum:
ZooKeeper nodes should reside in a private subnet and not be routable from outside. ZK's security is inadequate and, at any rate, it is no longer required to bootstrap Kafka clients with ZK address(es).
Limit access to the brokers at the network level. If all your clients connect from a trusted network, then set appropriate firewall rules. If in AWS, use VPC peering or Direct Connect if you are connecting cloud-to-cloud or cloud-to-ground. If most of your clients are on a trusted network but a relative minority are not, force the latter to go via a VPN tunnel. Finally, if you want to allow connectivity from arbitrary locations, you'll just have to allow * on port 9092 (or whichever port you configure the brokers to listen on); just make sure that the other ports are closed.
Enable TLS (SSL) for client-broker connections. This is easily configured with a self-signed CA. Depending on how you expose your listeners, you may need to disable SSL hostname verification on the client. (The certificate chain of trust breaks if the advertised host names don't match the certificate's common name.) The clients will need the CA certificate installed. (Same CA that signed the brokers' certs.)
Optionally, you may enable mutual TLS authentication; however, this is logistically more taxing, as it requires each client to have its own private key that is signed by a CA trusted by the broker.
Use SASL to authenticate the clients to the brokers and create individual users for each application and each person that is expected to access the cluster (see the client sketch below).
Issue minimally-sufficient cluster- and topic-level access privileges in the ACLs for each user, following the Principle of Least Privilege (PoLP).
One other thing to bear in mind: not all tooling supports SASL/SSL connectivity, and some tools actually require a connection to the ZooKeeper nodes (which will not be reachable in the above setup). Make sure any tooling you rely on uses the 'new' style of connectivity directly to the Kafka brokers and does not require a ZooKeeper connection.
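To make the client side of the above concrete, here is a minimal sketch of a consumer connecting over SASL_SSL with the kafka-python library; the broker address, CA file, credentials, topic and SASL mechanism are placeholders that would have to match your broker configuration:

from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "orders",                                    # hypothetical topic
    bootstrap_servers="broker1.example.com:9092",
    security_protocol="SASL_SSL",
    ssl_cafile="/etc/kafka/ca.pem",              # CA that signed the brokers' certificates
    ssl_check_hostname=True,                     # set to False only if advertised names don't match the certs
    sasl_mechanism="SCRAM-SHA-256",
    sasl_plain_username="orders-app",
    sasl_plain_password="change-me",
    group_id="orders-app",
)
for record in consumer:
    print(record.topic, record.partition, record.offset, record.value)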
Beyond configuring client TLS, the brokers have to have public IPs, which we try to avoid. Normally, for other services, we hide everything behind load balancers. Would this be possible with Kafka?
I'm not sure the Confluent REST Proxy hosted on a public server is a real option when you need the high-performance batching of the Java producer client.

Load balancing: when should a user always connect to the same server?

I came across the "source" load balancing algorithm in HAProxy, which ensures that a user will connect to the same server by choosing the server based on a hash of the source IP.
Why and when is it important for a user to connect to the same server? I cannot think of a reason, assuming that all candidate servers serve identical content.
Furthermore, if there was the need for a user to always connect to the same server, then wouldn't load balancing be completely irrelevant for this user?
It is important for a user to connect to the same server if we want to achieve session persistence.
For example, when talking about an HTTP session, there is information (think of a shopping cart) specific to the session in question.
This dynamic information is not shared by the candidate servers unless they are configured to do so, and it is often simpler to deal with it at the load-balancing level.
The preferred way to deal with this in HAProxy is by using cookies, but this only works in HTTP mode. HAProxy offers the source load balancing algorithm for cases where cookies can't be used: in TCP mode, or with HTTP clients that refuse cookies.
You are right that load balancing will be irrelevant for the user in question until the cookie expires. But we generally need load balancing when dealing with many users, so that they can be served by multiple servers, with each user sticking to one of them.
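As a simplified illustration of what the source algorithm does, the sketch below (Python, with made-up backend names) hashes the client IP onto a fixed list of servers, so the same IP always maps to the same backend as long as the server list doesn't change; HAProxy's real implementation additionally accounts for server weights and health:

import hashlib

BACKENDS = ["app1:8080", "app2:8080", "app3:8080"]   # hypothetical backend pool

def pick_backend(client_ip: str) -> str:
    digest = hashlib.md5(client_ip.encode()).digest()
    return BACKENDS[int.from_bytes(digest[:4], "big") % len(BACKENDS)]

print(pick_backend("203.0.113.7"))    # same backend every time for this IP
print(pick_backend("198.51.100.23"))  # possibly a different backend, spreading the load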

SQL Server SSL Encryption

I have an IIS hosted WCF Service that uses SSL encryption. This service makes requests to a SQL Server 2014 database instance. When I make a call to the service the response message is encrypted. So, the connection between the client (browser) and the service is secure. I also want the connection between the service and the SQL Server 2014 database to be secure.
This is where my question comes in: I am not exactly sure how to do this. I read the article Enable Encrypted Connections to the Database Engine, and I was able to successfully add the certificate to the SQL Server database engine and change the Force Encryption flag to True. But now I am a bit confused as to whether I want to configure the server to accept encrypted connections or the client to request encrypted connections. Based on the scenario I presented above, it seems I want the client to request encrypted connections from the SQL Server database, correct? I guess one reason I am confused is that this is all happening on my development machine: SQL Server is hosted there, as is the IIS service.
If someone with experience could maybe clarify that for me I would greatly appreciate it.
If you haven't restarted the SQL Server service, then do so to complete the configuration change. It sounds like you applied the change correctly, and using a domain or public CA certificate will prevent a man-in-the-middle attack. To verify that the connections are secure, you can use a DMV named sys.dm_exec_connections, which should display TRUE in the encrypt_option column for all sessions, as below:
select session_id, net_transport, encrypt_option from sys.dm_exec_connections
I'm not certain that connections from the local host to SQL Server will be encrypted by SSL/TLS, since they would be using the shared memory protocol and Windows manages the security of shared memory.
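For the client side of your question, requesting encryption is done in the connection string. A minimal sketch using Python and pyodbc (the database name is a placeholder; an ADO.NET connection string for your WCF service uses the same Encrypt and TrustServerCertificate keywords):

import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=localhost;DATABASE=MyAppDb;"   # hypothetical database
    "Trusted_Connection=yes;"
    "Encrypt=yes;"                         # the client requests an encrypted connection
    "TrustServerCertificate=no;"           # reject certificates not signed by a trusted CA
)
row = conn.execute(
    "select encrypt_option from sys.dm_exec_connections where session_id = @@SPID"
).fetchone()
print(row.encrypt_option)                  # TRUE when the connection is encrypted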

Client-server communication

I am publicly distributing an application that can be installed on users' PCs. The client will periodically communicate with the server to send information, and the server has to acknowledge successful receipt of that information. Occasionally, the server will do one-way communication with the client. My question is: what is the best/most fail-proof/recommended way to do client-server communication when the client is massively distributed? I am currently focusing on a self-hosted service to do the communication. What precautions should I take if the clients' IP addresses change frequently?
My suggestions are:
Use HTTP or HTTPS on the default ports. By "massively distributed" I understand that you will have no control over network restrictions, firewalls, NAT traversal, etc. Using HTTP(S) and initiating the connections from the clients with simple web requests will save you a lot of trouble.
Use polling at regular/smart intervals to handle your occasional server-initiated data transfers. Clients running on workstations won't have a public IP address, let alone a fixed one.
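A minimal sketch of such a client, assuming a hypothetical HTTPS endpoint and payload: it only ever makes outbound requests, pushes its data, and polls for server-initiated messages. Because the client identifies itself with an ID rather than by its IP address, frequently changing client IPs don't matter:

import time
import requests

API = "https://api.example.com"    # hypothetical service endpoint
CLIENT_ID = "client-1234"

while True:
    try:
        # Push local data; a 2xx response is the acknowledgement of receipt.
        requests.post(f"{API}/report", json={"client_id": CLIENT_ID, "metric": 42},
                      timeout=10).raise_for_status()
        # Poll for anything the server wants to send (the occasional one-way messages).
        pending = requests.get(f"{API}/commands", params={"client_id": CLIENT_ID}, timeout=10)
        pending.raise_for_status()
        for command in pending.json():
            print("server says:", command)
    except requests.RequestException:
        pass                       # network errors are expected; just retry on the next cycle
    time.sleep(60)                 # regular/smart interval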

Securely transferring data from server to external database

Reason: We have a new client who wishes for the database containing all their information to be stored on their own database server. However, the web server will be located at another location.
Question: How can you secure the data from the time it is input until the time the external database saves it?
From some reading, it seems that SSL will only cover so much and that some sort of secure connection must be set up between the two. Or does SSL cover this connection as well? It somewhat seems that it should.
SSL provides a reasonable solution to transport security (keeping the data safe from prying eyes as it goes over the wire).
Lock down the endpoints between the two systems as far as practical. For example, in addition to encryption, our firewall blocks access to the database except from well-known IP addresses.
You still need to ensure that your web server is secure (since the data is available unencrypted there), and that their database server is secure (including encryption of sensitive data when stored in database tables).
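As an illustration of the transport-security part, here is a minimal sketch of the web application requiring TLS for its database connection, using PostgreSQL and psycopg2 as an example; the host, credentials and CA file are placeholders, and other databases expose equivalent options (e.g. Encrypt=yes for SQL Server):

import psycopg2

conn = psycopg2.connect(
    host="db.client-site.example.com",   # the client's remote database server
    dbname="clientdata",
    user="webapp",
    password="change-me",
    sslmode="verify-full",               # require TLS and verify the server certificate
    sslrootcert="/etc/ssl/certs/client-db-ca.pem",
)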