My OSCommerce site includes a separately programmed feature for which I use SQL tables. I've decided to host its tables on a remote site offering free SQL accounts. I'd like to know if there could be any disadvantages to this approach.
Thanks
Syd
Disadvantages might include the longer time it will take to run the script, since it has to make a connection over the network, and the need to make sure the database connection is made securely -- that the password for the database login isn't passed in clear text and that the permissions on the remote end are set to allow connections only from your server's IP. Of course you'll also want to make sure that the free hosting company provides adequate security for the database itself -- "free" doesn't always pay for the best setup or the most knowledgeable technicians...
You should connect to your MySQL database using MySQL's built-in SSL support. This ensures that all data transferred over the connection is encrypted. You can create self-signed X.509 certificates and hard-code (pin) them on the client; this is free, and you don't need a CA like Verisign for it. If the certificate doesn't check out, you're looking at a MITM, and the connection fails before the password is ever sent.
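To make the certificate pinning concrete, here is a minimal sketch from a Java client using MySQL Connector/J (OSCommerce itself is PHP, so treat this purely as an illustration of the SSL options involved; the property names assume Connector/J 8.x, and the host, keystore path, and credentials are made up):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.util.Properties;

public class SslMySqlExample {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.setProperty("user", "shop_addon");
        props.setProperty("password", System.getenv("DB_PASSWORD"));
        // Refuse to connect unless the server presents a certificate we trust.
        props.setProperty("sslMode", "VERIFY_CA");
        // Keystore holding the self-signed CA/server certificate we pinned.
        props.setProperty("trustCertificateKeyStoreUrl", "file:/etc/shop/truststore.jks");
        props.setProperty("trustCertificateKeyStorePassword", "changeit");

        try (Connection conn = DriverManager.getConnection(
                "jdbc:mysql://db.remote-host.example:3306/addon_db", props)) {
            System.out.println("Connected over TLS: " + conn.isValid(5));
        }
    }
}
```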
Another option is a VPN, which is better suited if you have multiple daemons that require secure point-to-point connections.
I am assuming you are hosting the OSCommerce database on the same server as the web server and that your host only allocates one database per customer. You can put the add-on's tables in the same database as the regular OSCommerce tables as long as you give them a prefix so there are no name conflicts. If the code of the third-party solution is any good, it won't be too hard to configure a table prefix so that the code knows the new names of the tables. This avoids any potential latency problem and keeps control in your hands. I use this trick to host multiple WordPress blogs in the same database.
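Just to illustrate the prefix idea (the prefix and table names below are invented; in practice the add-on would expose this in its own configuration):

```java
// Hypothetical sketch: build add-on table names from a configurable prefix so they
// can live alongside the stock OSCommerce tables in the same database.
public final class AddonTables {
    private static final String PREFIX = "myaddon_";   // assumed config value

    static String table(String baseName) {
        return PREFIX + baseName;                       // e.g. "myaddon_feature_log"
    }

    public static void main(String[] args) {
        String sql = "SELECT * FROM " + table("feature_log") + " WHERE customers_id = ?";
        System.out.println(sql);                        // no clash with the stock tables
    }
}
```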
Reason: We have a new client who wants the database containing all their info to be stored on their own database server. However, the web server will be located somewhere else.
Question: How can you secure the data from the time it is entered until the time the external database saves it?
From some reading it seems that SSL only covers so much and that some sort of secure connection must be set up between the two servers. Or does SSL cover that connection as well? It seems like it should.
SSL provides a reasonable solution to transport security (keeping the data safe from prying eyes as it goes over the wire).
Lock down the endpoints between the two systems as far as practical. For example, in addition to encryption, our firewall blocks network access to the database except from well-known IP addresses.
You still need to ensure that your web server is secure (since the data is available unencrypted there), and that their database server is secure (including encryption of sensitive data when stored in database tables).
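For the "encrypt sensitive data when stored" part, here is a minimal sketch of application-level field encryption before the value ever reaches the remote database (Java, AES-GCM; the key handling is deliberately simplified and the field name is made up):

```java
import java.nio.charset.StandardCharsets;
import java.security.SecureRandom;
import java.util.Base64;
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;

public class FieldEncryptionSketch {
    public static void main(String[] args) throws Exception {
        // In production the key would come from a key store or secrets manager,
        // not be generated fresh on every run like this.
        KeyGenerator gen = KeyGenerator.getInstance("AES");
        gen.init(256);
        SecretKey key = gen.generateKey();

        String cardNumber = "4111111111111111";  // hypothetical sensitive field

        byte[] iv = new byte[12];
        new SecureRandom().nextBytes(iv);
        Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
        cipher.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(128, iv));
        byte[] ciphertext = cipher.doFinal(cardNumber.getBytes(StandardCharsets.UTF_8));

        // Persist IV + ciphertext in the table column instead of the clear value.
        String stored = Base64.getEncoder().encodeToString(iv) + ":"
                + Base64.getEncoder().encodeToString(ciphertext);
        System.out.println("value persisted to DB: " + stored);
    }
}
```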
For those in the know, what recommendations do you have for storing passwords in the Windows Azure configuration file (which is accessed via RoleManager)? It's important that:
1) Developers should be able to connect to all production databases while testing on their own local box, which means using the same configuration file,
2) Since developers need the same configuration file (or a very similar one) as what is deployed, passwords should not be legible.
I understand that even if the passwords in the configuration were not legible, developers could still debug/watch to grab the connection strings; while this is not desirable, it is at least acceptable. What is not acceptable is people being able to read these files and grab connection strings (or credentials for other resources that require passwords).
Best recommendations?
Thanks,
Aaron
Hmm, devs are not supposed to have access to production databases in the first place. That's inherently insecure, whether it's on Azure or somewhere else. Performing live debugging against a production database is risky business, as a simple mistake is likely to trash your whole production. Instead, I would suggest duplicating the production data (perhaps as an overnight process) and letting the devs work against a non-prod copy.
I think it could be partially solved by a kind of credential-storage service.
I mean a kind of service that does not need passwords, but allows access only to machines and SSPI-authenticated users that are whitelisted.
This service could be a simple Web API hosted on an SSL-protected server, working along these lines:
0) Each secured piece of information has a kind of ACL per named resource: an IP whitelist, a machine-name-based or certificate-based whitelist, or a mix of these.
1) All changes to the stored data are made only via RDP or SSH access to the server hosting the service.
2) The secured pieces of information are accessed only over SSL, and the API is read-only.
3) The client must first confirm its permissions and obtain a temporary token with a call to an API like
https://s.product.com/
On each call, the client must also provide a certificate, and the machine identity must match the logical whitelist data for the requested resource.
4) A request for data then looks like this:
Url: https://s.product.com/resource-name
Header: X-Ticket: the value obtained at step 3, until it expires
Certificate: the same certificate as used for step 3.
So, instead of a username and password, you can store an alias for the secured resource in the connection string, and in code the alias is replaced by the real username/password, obtained as in step 4, inside a SQL connection factory. The alias can be specified as the username in a special format like obscured#s.product.com/product1/dev/resource-name
Dev and prod instances can have different credential aliases, like product1.dev/resource1 and product1/staging/resource1 and so on.
So only by debugging the prod server, sniffing its traffic, or embedding logging/emailing code at compile time would it be possible to learn the production credentials for a given secured resource.
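A rough client-side sketch of steps 3 and 4 (Java's built-in HttpClient; the URLs are the placeholders from above, and the client-certificate setup is omitted for brevity):

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class CredentialServiceClientSketch {
    public static void main(String[] args) throws Exception {
        // Step 3: obtain a temporary ticket. In a real setup the HttpClient would be
        // built with an SSLContext carrying the client certificate (machine identity).
        HttpClient client = HttpClient.newHttpClient();

        HttpRequest ticketRequest = HttpRequest.newBuilder()
                .uri(URI.create("https://s.product.com/"))
                .GET()
                .build();
        String ticket = client.send(ticketRequest, HttpResponse.BodyHandlers.ofString()).body();

        // Step 4: fetch the credentials for a named resource,
        // presenting the ticket in the X-Ticket header.
        HttpRequest resourceRequest = HttpRequest.newBuilder()
                .uri(URI.create("https://s.product.com/resource-name"))
                .header("X-Ticket", ticket)
                .GET()
                .build();
        String credentials = client.send(resourceRequest, HttpResponse.BodyHandlers.ofString()).body();

        // A SQL connection factory would now substitute these credentials
        // for the alias found in the connection string.
        System.out.println("resolved credentials for alias: " + credentials);
    }
}
```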
There are times we need to create an ODBC connection over the "tubes" to one of our customer sites. We would like to provide as much security as possible to our customers, given we are using ODBC and, well...
Anyway, there is a checkbox setting in the SQL Server DSN that says "Use strong encryption for data", but absolutely no documentation for it. The only references I can find on the Google nets are unanswered questions -- not very encouraging. Does anybody have a clue what it does or how it works? If that isn't a way to encrypt the data stream, is there another way?
BTW, we cannot rely on our customers to force encryption from their end, and dealing with security certificates would be a real nightmare.
Thanks in advance,
Dave
Is it SQL 2000 or 2005/2008?
The encryption enforcement can be requested by the client or enforced by the server. The encryption is based on the Schannel protocol (SSL) and as such requires a valid certificate deployed on the server and trusted by the client; there is no way around that. The certificate has to be signed by an authority that is trusted by the client and, among other typical server certificate requirements, must have the FQDN the client uses to connect as its subject.
In SQL 2005 How to: Enable Encrypted Connections
In SQL 2000 Configure the Server and Request encryption by client
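For comparison, here is what a client-requested encrypted connection looks like from Java with Microsoft's JDBC driver (not ODBC, but it negotiates the same TLS handshake underneath; the property names assume the mssql-jdbc driver, and the host, database, and credentials are made up):

```java
import java.sql.Connection;
import java.sql.DriverManager;

public class EncryptedSqlServerConnection {
    public static void main(String[] args) throws Exception {
        // encrypt=true asks the driver to negotiate TLS with the server;
        // trustServerCertificate=false forces validation against a trusted,
        // correctly named certificate, exactly as described above.
        String url = "jdbc:sqlserver://sql.example.com:1433;"
                + "databaseName=CustomerDb;"
                + "encrypt=true;"
                + "trustServerCertificate=false;";

        try (Connection conn = DriverManager.getConnection(
                url, "appUser", System.getenv("SQL_PASSWORD"))) {
            System.out.println("Encrypted connection established: " + conn.isValid(5));
        }
    }
}
```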
There is no reason you can't have a secure connection while using ODBC. The responsibility for over-the-wire security falls to the ODBC driver (the database-specific part). If the driver doesn't already provide for this (SQL Server may or may not -- I don't know what "Use strong encryption for data" applies to), you can probably add your own. One possibility would be to create an SSH tunnel, e.g. using ssh -L. I don't know if this counts as a "nightmare", but it would probably be an effective and fairly simple technique.
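A sketch of the ssh -L idea, assuming you have SSH access to a host on the customer's network (all host names and ports below are invented); the application then connects to the local end of the tunnel as if the database were local:

```java
// Outside the application, forward a local port to the customer's SQL Server
// through SSH (everything on the tunnel is encrypted by SSH itself):
//
//     ssh -L 11433:sqlserver.customer.internal:1433 user@gateway.customer.example
//
// The application then points its data source at the local end of the tunnel.
import java.sql.Connection;
import java.sql.DriverManager;

public class TunnelledConnectionSketch {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:sqlserver://localhost:11433;databaseName=CustomerDb;";
        try (Connection conn = DriverManager.getConnection(
                url, "appUser", System.getenv("SQL_PASSWORD"))) {
            System.out.println("Connected through the SSH tunnel: " + conn.isValid(5));
        }
    }
}
```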
I have the following scenario and can't seem to find anything on the net, or maybe I am looking for the wrong thing:
I am working on a web-based data storage system. There are different users at different locations, and only certain users are allowed to access certain parts of the system. Now, we do not want them to connect to these parts from home or from a different computer than the one they use at their workplace (there are various reasons for that).
Now my question is: is there a way to have the workplace PC identify itself to the server through the browser, and if so, how can I do that?
Oh, and yes, it is supposed to be web-based.
I hope I explained it so everyone understands.
Thanks in advance for your replies.
... dg
I agree with Lenni... IP address is a possible solution if they are static or the DHCP server consistently assigns the same IP address to the same machine.
Alternatively, you might also consider authentication via "personal certificates" ... that's what they are called in Firefox; I don't know if that's the standard name or not. (Obviously I haven't worked with these before.)
Basically they are SSL/PKI certificates installed on the client (user's) machine that identify that machine as being the machine it claims to be -- that is, if the user tries to connect from a machine that doesn't have a certificate, or has one you don't allow, you would deny them.
I don't know the issues around this ... it might be relatively easy for the same user to take the certificate off one computer and install it on another one with the correct password (i.e. it authenticates the user), or it might be keyed specifically to that machine somehow (i.e. it authenticates the machine). And a quick google search didn't turn up any obvious "how to" instructions on how it all works, but it might be worth looking into.
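For what it's worth, here is a rough sketch of what the server-side check might look like in a Java servlet once client certificates are required at the TLS layer (the request attribute name is the standard Servlet API one; the allowed-subject whitelist is obviously made up):

```java
import java.io.IOException;
import java.security.cert.X509Certificate;
import java.util.Set;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class MachineCertFilterServlet extends HttpServlet {
    // Hypothetical whitelist of certificate subjects issued to workplace PCs.
    private static final Set<String> ALLOWED_SUBJECTS =
            Set.of("CN=workstation-01,O=Example Corp", "CN=workstation-02,O=Example Corp");

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        // The container exposes the verified client certificate chain under this attribute
        // when the connector is configured to request/require client certificates.
        X509Certificate[] chain =
                (X509Certificate[]) req.getAttribute("javax.servlet.request.X509Certificate");

        if (chain == null || chain.length == 0
                || !ALLOWED_SUBJECTS.contains(chain[0].getSubjectX500Principal().getName())) {
            resp.sendError(HttpServletResponse.SC_FORBIDDEN, "Unrecognised client machine");
            return;
        }
        resp.getWriter().println("Welcome, " + chain[0].getSubjectX500Principal().getName());
    }
}
```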
---Lawrence
Since you're going web-based you can:
Examine the remote host's IP address (compare it against known internal subnets, etc.) -- see the sketch after this list
During the authentication process, you can ping the remote IP and take a look at the TTL on the returned packets, if it's too low, then the computer can't be from the local network. (of course this can be broken, but it's just 1 more thing)
If you're doing it over IIS, then you can integrate into SSO (probably the best if you can do it)
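A minimal sketch of the subnet comparison mentioned in the first point (the 10.1.0.0/16 range and the way you obtain the caller's address are assumptions about your environment; in a servlet you would pass request.getRemoteAddr()):

```java
import java.net.InetAddress;

public class SubnetCheckSketch {
    // Hypothetical internal subnet: 10.1.0.0/16
    private static final byte[] INTERNAL_PREFIX = {10, 1};

    static boolean isInternal(String remoteAddr) throws Exception {
        byte[] octets = InetAddress.getByName(remoteAddr).getAddress();
        return octets.length == 4
                && octets[0] == INTERNAL_PREFIX[0]
                && octets[1] == INTERNAL_PREFIX[1];
    }

    public static void main(String[] args) throws Exception {
        System.out.println(isInternal("10.1.42.7"));   // true  -> allow
        System.out.println(isInternal("203.0.113.9")); // false -> deny
    }
}
```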
If it's supposed to be web-based (and by that I mean that the web server should be able to uniquely identify the user's machine), then your choices are limited: per se, there's nothing you can obtain from the browser's headers or request body that allows you to identify the machine. I suppose this is by design, due to the obvious privacy implications.
There are options, though, none of them pain-free: you could use an ActiveX control, which only runs on Windows (and not in all browsers, I think) and requires elevated privileges, or a Firefox plug-in (obviously Firefox only). At any rate, a plain-vanilla browser will otherwise escape identification.
There are only a few REAL solutions to this. Here are a couple:
Use domain authentication, and disallow users who are connecting over a VPN.
Use known IP ranges to allow or disallow access.
IP address. Not bombproof security but a start.
How can I work with Novell eDirectory services in J2SE? Will JNDI work with eDirectory? What are some resources I can use to learn about whatever library or libraries you suggest?
I just want to play around with retrieving information via LDAP for right now, and if I get things working the way I want, I will probably need to be able to modify objects later on.
Thanks!
JNDI should work with eDirectory.....
Try: http://developer.novell.com/wiki/index.php/Jldap and http://developer.novell.com/wiki/index.php/Novell_LDAP_Extended_Library
I used it successfully with OpenLDAP and it should suffice for eDirectory as well.
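If you go the plain JNDI route instead, a minimal search against eDirectory might look like this (the host name, base DN, and credentials are made up; ldaps on 636 assumes the tree CA certificate is already in your truststore, a point discussed further below):

```java
import java.util.Hashtable;
import javax.naming.Context;
import javax.naming.NamingEnumeration;
import javax.naming.directory.DirContext;
import javax.naming.directory.InitialDirContext;
import javax.naming.directory.SearchControls;
import javax.naming.directory.SearchResult;

public class EDirectoryJndiExample {
    public static void main(String[] args) throws Exception {
        Hashtable<String, String> env = new Hashtable<>();
        env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
        // ldaps:// on 636 -- the tree CA certificate must already be trusted by the JVM.
        env.put(Context.PROVIDER_URL, "ldaps://edir.example.com:636");
        env.put(Context.SECURITY_AUTHENTICATION, "simple");
        env.put(Context.SECURITY_PRINCIPAL, "cn=admin,o=acme");           // made-up DN
        env.put(Context.SECURITY_CREDENTIALS, System.getenv("EDIR_PASSWORD"));

        DirContext ctx = new InitialDirContext(env);
        try {
            SearchControls controls = new SearchControls();
            controls.setSearchScope(SearchControls.SUBTREE_SCOPE);
            NamingEnumeration<SearchResult> results =
                    ctx.search("o=acme", "(objectClass=inetOrgPerson)", controls);
            while (results.hasMore()) {
                System.out.println(results.next().getNameInNamespace());
            }
        } finally {
            ctx.close();
        }
    }
}
```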
Any LDAP interface you want to use should work fine against eDirectory.
Be aware that the configuration of the LDAP server may not allow clear-text passwords, in which case you will need to bind to port 636 via SSL (with the certificate already imported into your keystore) or via TLS (retrieving the tree CA's public key on the fly).
If you have administrative access to the eDirectory server, you can easily change that, but still best to confirm that you can get it to work over SSL/TLS (aka LDAPS).
If you really need it, you can ask the admins for a server with only a replica of some test partition (and thus no real user data in its view) and test via cleartext against that.
It is very easy in eDirectory to add a new replica of a partition, or to carve off or merge a partition, and it can all be done live.
It is similarly very easy to host replicas of many partitions on one server. (Officially there is no limit on the number of partitions in a tree or replicas on a server, though it used to be 256 in older versions, before 8.x.)
If you are allowed access to the eDirectory server, you want to ask for access to Dstrace (there are several versions of this; see Many Faces of Dstrace). There is a web interface (server:8008 on NetWare, 8010 on Windows, 8028 on Unix/Linux usually) as well as other interfaces. If you enable the LDAP trace option (and turn off all the others) you can fairly completely debug what is going on at the server side: see the errors, the communication or lack thereof, and so on.