I want to create and configure an SSLContext object and then make mysql.jdbc.Driver use it for establishing a secure connection. Is there a better approach for this than writing a custom jdbc.Driver?
You can create a custom com.mysql.jdbc.SocketFactory class that creates SSLSockets using an SSLSocketFactory obtained from this SSLContext. Then, you can pass that class name to the MySQL JDBC connector via the socketFactory property (see the table in the documentation).
This class needs a constructor with no parameters; its Socket connect(String host, int portNumber, Properties props) method is given the JDBC properties via its props parameter, should you need them.
Note that you should not only check the validity of your certificate, but also check that the host name matches. If you're using Java 7, this can be done like this before returning the SSLSocket you've just created:
SSLParameters sslParams = new SSLParameters();
sslParams.setEndpointIdentificationAlgorithm("HTTPS");
sslSocket.setSSLParameters(sslParams);
(The host name matching rules for HTTPS should be sufficiently sensible for most protocols, including MySQL.)
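For illustration, a minimal sketch of such a factory might look like this (untested, and assuming Connector/J 5.x's com.mysql.jdbc.SocketFactory interface; MySslContextHolder is a hypothetical holder for the SSLContext you configured elsewhere):

import java.io.IOException;
import java.net.Socket;
import java.util.Properties;
import javax.net.ssl.SSLContext;
import javax.net.ssl.SSLParameters;
import javax.net.ssl.SSLSocket;

public class MySslSocketFactory implements com.mysql.jdbc.SocketFactory {

    private SSLSocket socket;

    public Socket connect(String host, int portNumber, Properties props) throws IOException {
        // MySslContextHolder is a placeholder for however you expose your SSLContext
        SSLContext sslContext = MySslContextHolder.get();
        socket = (SSLSocket) sslContext.getSocketFactory().createSocket(host, portNumber);

        // Check that the server certificate matches the host name (Java 7+)
        SSLParameters sslParams = new SSLParameters();
        sslParams.setEndpointIdentificationAlgorithm("HTTPS");
        socket.setSSLParameters(sslParams);
        return socket;
    }

    public Socket beforeHandshake() throws IOException {
        return socket;
    }

    public Socket afterHandshake() throws IOException {
        return socket;
    }
}

You would then reference it from the connection URL, e.g. jdbc:mysql://host:3306/db?socketFactory=com.example.MySslSocketFactory (the package and class name here are placeholders for whatever you choose).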
When running a Neo4j database server standalone (on Ubuntu 14.04), configuration options are set for the global installation in /etc/neo4j/neo4j.conf or possibly $NEO4J_HOME/conf/neo4j.conf.
However, when instantiating a Neo4j database from Java or Scala using Apache's Neo4jGraph class (org.apache.tinkerpop.gremlin.neo4j.structure.Neo4jGraph), there is no global installation, and the constructor does not (as far as I can tell) look for any configuration files.
In particular, when running the test suite for my application, I end up with many simultaneous instances of Neo4jGraph, which ends up throwing a java.net.BindException: Address already in use because all of these instances are trying to communicate over a small range of ports for online backup, which I don't actually need. These channels are set with config options dbms.backup.address (default value: 127.0.0.1:6362-6372) and dbms.backup.enabled (default value: true).
My problem would be solved by setting dbms.backup.enabled to false, or expanding the port range.
Things that have not worked:
Creating /etc/neo4j/neo4j.conf containing the line dbms.backup.enabled=false.
Creating the same file in my project's src/main/resources directory.
Creating the same file in src/main/resources/neo4j.
Manually setting the configuration property inside the Scala code:
val db = new Neo4jGraph(dataDirectory)
db.configuration.addProperty("dbms.backup.enabled",false)
or
db.configuration.addProperty("neo4j.conf.dbms.backup.enabled",false)
or
db.configuration.addProperty("gremlin.neo4j.conf.dbms.backup.enabled",false)
How should I go about setting this property?
Neo4jGraph configuration through TinkerPop is accomplished by a pass-through of configuration keys. In TinkerPop 3.x, that would mean that all Neo4j keys prefixed with gremlin.neo4j.conf that are provided via Configuration object to Neo4jGraph.open() or GraphFactory.open() will be passed down directly to the Neo4j instance. You can see examples of this here in the TinkerPop documentation on high availability configuration.
In TinkerPop 2.x, the same approach was taken, but the key prefix was blueprints.neo4j.conf.* instead, as discussed here.
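To make the 3.x pass-through concrete, a configuration along these lines (an untested sketch; /path/to/db is a placeholder) should hand dbms.backup.enabled down to Neo4j:

import org.apache.commons.configuration.BaseConfiguration;
import org.apache.tinkerpop.gremlin.neo4j.structure.Neo4jGraph;

// Keys under the "gremlin.neo4j.conf" prefix are stripped of the prefix
// and passed directly to the underlying Neo4j instance
BaseConfiguration conf = new BaseConfiguration();
conf.setProperty("gremlin.neo4j.directory", "/path/to/db");
conf.setProperty("gremlin.neo4j.conf.dbms.backup.enabled", "false");
Neo4jGraph graph = Neo4jGraph.open(conf);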
Manipulating db.configuration after the database connection had already been opened was definitely futile.
stephen mallette's answer was on the right track, but this particular configuration doesn't appear to pass through in the way his linked example does. There is a naming mismatch between the configuration keys expected in neo4j.conf and those expected in org.neo4j.backup.OnlineBackupKernelExtension. Instead of dbms.backup.address and dbms.backup.enabled, that class looks for config keys online_backup_server and online_backup_enabled.
I was not able to get these keys passed down to the underlying Neo4jGraphAPI instance correctly. What I had to do instead was the following:
import org.neo4j.tinkerpop.api.impl.Neo4jFactoryImpl
import scala.collection.JavaConverters._
val factory = new Neo4jFactoryImpl()
val config = Map(
"online_backup_enabled" -> "true",
"online_backup_server" -> "0.0.0.0:6350-6359"
).asJava
val db = Neo4jGraph.open(factory.newGraphDatabase(dataDirectory,config))
With this initialization, the instance correctly listened for backups on port 6350; changing "true" to "false" disabled backup listening.
Using Neo4j 3.0.0, the following disables port listening for me (Java code):
import org.apache.commons.configuration.BaseConfiguration;
import org.apache.tinkerpop.gremlin.neo4j.structure.Neo4jGraph;
BaseConfiguration conf = new BaseConfiguration();
conf.setProperty(Neo4jGraph.CONFIG_DIRECTORY, "/path/to/db");
conf.setProperty(Neo4jGraph.CONFIG_CONF + "." + "dbms.backup.enabled", "false");
Neo4jGraph graph = Neo4jGraph.open(conf);
I am trying to create an AWS instance using jclouds 1.9.0 and then run a script on it (via SSH). I am following the example located here, but I am getting authentication failure errors when the client (a Java program) tries to connect to the instance. The AWS console shows that the instance is up and running.
The example creates a LoginCredentials object:
String user = System.getProperty("user.name");
String privateKey = Files.toString(new File(System.getProperty("user.home") + "/.ssh/id_rsa"), UTF_8);
return LoginCredentials.builder().user(user).privateKey(privateKey).build();
which is later used by the SSH client:
responses = compute.runScriptOnNodesMatching(
inGroup(groupName), // predicate used to select nodes
exec(command), // what you actually intend to run
overrideLoginCredentials(login) // use my local user & ssh key
.runAsRoot(false) // don't attempt to run as root (sudo)
.wrapInInitScript(false));
Some login information is injected into the instance with the following commands:
Statement bootInstructions = AdminAccess.standard();
templateBuilder.options(runScript(bootInstructions));
Since I am on a Windows machine, the creation of LoginCredentials 'fails', so I altered the code to:
String user = "ec2-user";
String privateKey = "-----BEGIN RSA PRIVATE KEY-----.....-----END RSA PRIVATE KEY-----";
return LoginCredentials.builder().user(user).privateKey(privateKey).build();
I also tried to define the credentials while building the template, as described in the "EC2: In Depth" guide, but with no luck.
An alternative is to build the instance and inject the keypair as follows, but this implies that I need to have the SSH key stored in my AWS console, which is not currently the case, and it also breaks the functionality of running a script (via SSH), since I cannot infer the NodeMetadata from a RunningInstance object.
RunInstancesOptions options = RunInstancesOptions.Builder.asType("t2.micro").withKeyName(keypair).withSecurityGroup(securityGroup).withUserData(script.getBytes());
Any suggestions??
Note: While I am currently testing this on aws, I want to keep the code as decoupled from the provider as possible.
Update 26/10/2015
Based on @Ignasi Barrera's answer, I changed my implementation by adding .init(new MyAdminAccessConfiguration()) when creating the bootInstructions:
Statement bootInstructions = AdminAccess.standard().init(new MyAdminAccessConfiguration());
templateBuilder.options(runScript(bootInstructions));
Where MyAdminAccessConfiguration is my own implementation of the AdminAccessConfiguration interface, as @Ignasi Barrera described.
I think the issue lies in the fact that the jclouds code runs on a Windows machine and jclouds makes some Unix assumptions by default.
There are two different things here: first, AdminAccess.standard() is used to configure a user in the deployed node once it boots, and later the LoginCredentials object passed to the run-script method is used to authenticate as the user that was created by the previous statement.
The issue here is that AdminAccess.standard() reads the "current user" information and assumes a Unix system. That user information is provided by this Default class, and in your case I'm pretty sure it will fall back to the catch block and return an auto-generated SSH key pair. That means AdminAccess.standard() is creating a user in the node with an auto-generated (random) SSH key, but the LoginCredentials you are building don't match those keys, hence the authentication failure.
Since the AdminAccess entity is immutable, the better and cleaner approach to fix this is to create your own implementation of the AdminAccessConfiguration interface. You can just copy the entire Default class and change the Unix specific bits to accommodate the SSH setup in your Windows machine. Once you have the implementation class, you can inject it by creating a Guice module and passing it to the list of modules provided when creating the jclouds context. Something like:
// Create the custom module to inject your implementation
Module windowsAdminAccess = new AbstractModule() {
@Override protected void configure() {
bind(AdminAccessConfiguration.class).to(YourCustomWindowsImpl.class).in(Scopes.SINGLETON);
}
};
// Provide the module in the module list when creating the context
ComputeServiceContext context = ContextBuilder.newBuilder("aws-ec2")
.credentials("api-key", "api-secret")
.modules(ImmutableSet.<Module> of(windowsAdminAccess, new SshjSshClientModule()))
.buildView(ComputeServiceContext.class);
I have about 85 SSIS packages that are using the same connection manager.
I understand that each package has its own connection manager.
I am trying to decide on the best configuration approach to simply set the connection string of the connection manager based on the server the packages reside on.
I have looked at all kinds of suggestions online, but cannot find anywhere a practice that lets me simply copy the configuration from one package to the rest of the packages.
There are obviously many approaches such as XML file, SQL Server, Environment Variable, etc.
All the articles out there point to using an indirect method via the XML or SQL approach. Why would using an environment variable just to hold a connection string be such a bad approach?
Any suggestions are highly appreciated.
Thanks!
Why would using an environment variable just to hold a connection string be such a bad approach?
I find the environment variable or registry key configuration approach severely limited by the fact that it can only configure one item at a time. For a connection string, you'd need to define an environment variable for each catalog on a given server. Maybe it's only 2 or 3 and that's manageable. We had a good 30+ per database instance, and we had multi-instanced machines, so you can see how quickly this problem explodes into a maintenance nightmare. Contrast that with a table- or XML-based approach, which can hold multiple configuration items for a given configuration key.
...best configuration approach to simply set the connection string of the connection manager based on the server the packages reside on.
If you go this route, I'd propose creating a variable, ConnectionString, and using it to configure the property. It's an extra step, but again, I find it easier to debug a complex expression on a variable than a complex expression on a property. With a variable, you can always pop a breakpoint on the package and look at the Locals window to see the current value.
After creating a variable named ConnectionString, I right click on it, select Properties and set EvaluateAsExpression equal to True and the Expression property to something like "Data Source="+ @[System::MachineName] +"\\DEV2012;Initial Catalog=FOO;Provider=SQLNCLI11.1;Integrated Security=SSPI;"
When that is evaluated, it'd fill in the current machine's name (DEVSQLA) and I'd have a valid OLE DB connection string that connects to a named instance DEV2012.
Data Source=DEVSQLA\DEV2012;Initial Catalog=FOO;Provider=SQLNCLI11.1;Integrated Security=SSPI;
If you have more complex configuration needs than just the one variable, then I could see you using this to configure a connection manager to a sql table that holds the full repository of all the configuration keys and values.
...cannot find anywhere a practice that lets me simply copy the configuration from one package to the rest of the packages
I'd go about modifying all 80-something packages programmatically. We received a passel of packages from a third party that had not followed our procedures for configuration and logging. The code wasn't terribly hard, and if you describe exactly the types of changes you'd make to solve your need, I'd be happy to toss some code onto this answer. It could be as simple as the following. After calling the function, the package will have a SQL Server configuration on the SSISDB OLE DB connection manager, reading from a table called dbo.sysdtsconfig for a filter named Default.2008.Sales.
string currentPackage = @"C:\Src\Package1.dtsx";

public static void CleanUpPackages(string currentPackage)
{
    // The Application object loads and saves packages
    Application app = new Application();
    Package p = app.LoadPackage(currentPackage, null);

    // Apply configuration Default.2008.Sales
    // ConfigurationString => "SSISDB";"[dbo].[sysdtsconfig]";"Default.2008.Sales"
    // Name => SalesConfiguration
    Configuration c = p.Configurations.Add();
    c.Name = "SalesConfiguration";
    c.ConfigurationType = DTSConfigurationType.SqlServer;
    c.ConfigurationString = @"""SSISDB"";""[dbo].[sysdtsconfig]"";""Default.2008.Sales""";

    // Configurations only take effect when enabled on the package
    p.EnableConfigurations = true;

    app.SaveToXml(currentPackage, p, null);
}
Adding a variable into the packages would not take much more code. Inside the cleanup proc, add code like this to add a new variable to your package with an expression like the one above.
string variableName = string.Empty;
bool readOnly = false;
string nameSpace = "User";
string variableValue = string.Empty;
string literalExpression = string.Empty;
variableName = "ConnectionString";
literalExpression = @"""Data Source=""+ @[System::MachineName] +""\\DEV2012;Initial Catalog=FOO;Provider=SQLNCLI11.1;Integrated Security=SSPI;""";
p.Variables.Add(variableName, readOnly, nameSpace, variableValue);
p.Variables[variableName].EvaluateAsExpression = true;
p.Variables[variableName].Expression = literalExpression;
Let me know if I missed anything or you'd like clarification on any points.
We have a situation where we have multiple databases with identical schema, but different data in each. We're creating a single session factory to handle this.
The problem is that we don't know which database we'll connect to until runtime, when we can provide it. But on startup, to get the factory built, we need to connect to a database with that schema. We currently do this by creating the schema in a known location and using that, but we'd like to remove that requirement.
I haven't been able to find a way to create the session factory without specifying a connection. We don't expect to be able to use the OpenSession method with no parameters, and that's ok.
Any ideas?
Thanks
Andy
Either implement your own IConnectionProvider or pass your own connection to ISessionFactory.OpenSession(IDbConnection) (but read the method's comments about connection tracking)
The solution we came up with was to create a class which manages this for us. The class can use some information in the method call to do some routing logic to figure out where the database is, and then call OpenSession passing the connection string.
You could also use the great NuGet package from Brady Gaster for this. I made my own implementation based on his NHQS package and it works very well.
You can find it here:
http://www.bradygaster.com/Tags/nhqs
good luck!
Came across this and thought I'd add my solution for future readers. It is basically what Mauricio Scheffer suggested: it encapsulates the 'switching' of the connection string and provides a single point of management (I like this better than having to pass a connection into each session call; there is less to 'miss' and go wrong).
I obtain the connection string during authentication of the client and set it on the context; then, using the following IConnectionProvider implementation, that value is used as the connection string whenever a session is opened:
/// <summary>
/// Provides ability to switch connection strings of an NHibernate Session Factory (use same factory for multiple, dynamically specified, database connections)
/// </summary>
public class DynamicDriverConnectionProvider : DriverConnectionProvider, IConnectionProvider
{
protected override string ConnectionString
{
get
{
var cxnObj = IsWebContext ?
    HttpContext.Current.Items["RequestConnectionString"] :
    System.Runtime.Remoting.Messaging.CallContext.GetData("RequestConnectionString");
if (cxnObj != null)
    return cxnObj.ToString();
// fall back on app startup, when no request connection string has been set yet
return base.ConnectionString;
}
}
private static bool IsWebContext
{
get { return (HttpContext.Current != null); }
}
}
Then wire it in during NHConfig:
var configuration = Fluently.Configure()
.Database(MsSqlConfiguration.MsSql2005
.Provider<DynamicDriverConnectionProvider>() //Like so
In SQL Server 2005, is there a way to specify more than one connection string from within a .NET application, with one being the preferred primary connection, and, if it is not available, defaulting to the other connection (which may go to a different DB/server, etc.)?
If nothing along those exact lines, is there anything we can use, without resorting to writing some kind of round-robin code to check connections?
Thanks.
We would typically use composition on our SqlConnection objects to check for this. All data access is done via backend classes, and we specify multiple servers within the web/app.config. (Forgive any errors, I am actually writing this out by hand)
It would look something like this:
class MyComponent
{
private SqlConnection connection;
....
public void CheckServers()
{
// Cycle through servers in configuration files, finding one that is usable
// When one is found assign the connection string to the SqlConnection
// A simple but resource-intensive way of checking for connectivity is to
// attempt to run a small query and check the return value
}
public void Open()
{
connection.Open();
}
public ConnectionState State
{
    // SqlConnection.State is read-only, so expose only a getter here
    get { return connection.State; }
}
// Use this method to return the selected connection string
public string SelectedConnectionString
{
get { return connection.ConnectionString; }
}
//and so on
}
This example includes no error checking or error logging, make sure you add that, so the object can optionally report which connections failed and why.
Assuming that you want to access the same set of data, you'd use clustering or mirroring to provide high availability.
The SQLNCLI provider supports SQL Server database mirroring:
Provider=SQLNCLI;Data Source=myServer;Failover Partner=myMirrorServer
Clustering just uses the virtual SQL instance name.
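So the connection string simply names that virtual instance, for example (MyVirtualSqlName being a placeholder for your cluster's virtual name):
Provider=SQLNCLI;Data Source=MyVirtualSqlName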
Otherwise, I can't quite grasp why you'd want to do this...
Unfortunately there are no FCL methods that do this - you will need to implement this yourself.