I'm trying to configure a datasource on Liberty Server 20.0.0.9.
I'm referring to the following instructions:
https://www.ibm.com/docs/en/was-liberty/base?topic=liberty-configuring-default-data-source
This is my server.xml
Features added:
<feature>jndi-1.0</feature>
<feature>jdbc-4.1</feature>
Library configuration
<library description="My shared library" id="MyLib" name="MyLib">
<fileset dir="${usr.extension.dir}/lib" id="MyLibTAI" includes="MyLibTAI-1.0.4.jar db2jcc-db2jcc4.jar"/>
</library>
Datasource
<dataSource id="defaultDS" jndiName="jdbc/defautlDS">
<jdbcDriver libraryRef="MyLib"/>
<properties.db2.jcc databaseName="DBXV01" serverName="192.33.112.21" portNumber="70080"/>
<containerAuthData user="myuser" password="mypwd"></containerAuthData>
</dataSource>
In my code I'm trying to refer to the datasource in different ways:
@Resource(name="defaultsDS") // I also tried name="jdbc/defaultDS"
or
ds = (DataSource) new InitialContext().lookup("java:comp/picoDS"); // Name not found exception
What am I missing?
Any help appreciated
Thank you
If you are doing a lookup of the jndiName configured in server.xml (as suggested in the other answer), be aware that it does not use container managed authentication, and thus the containerAuthData that you specified in server.xml for the dataSource is not being used. You were on the right track the first time in using a resource reference; here is how to do that correctly:
@Resource(lookup = "jdbc/defautlDS")
DataSource datasource;
Similarly, you can use the Resource annotation to define a name for the resource reference that can be looked up in java:comp:
@Resource(lookup = "jdbc/defautlDS", name = "java:comp/env/picoDS")
DataSource datasource;
after which you can successfully do:
DataSource ds = (DataSource) new InitialContext().lookup("java:comp/env/picoDS");
I added the /env to the resource reference name in the above to better follow standards.
In both of the above, I used defautlDS to match the unusual spelling of the jndiName value from your example, which might itself be a typo. Make sure it is consistent in both places in order for it to work.
By using the resource reference, you get container managed authentication (container managed is the default for resource reference lookups), and so you end up using the user and password from the containerAuthData in your config. If you do not use a resource reference, you get application managed authentication.
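To put the pieces together, here is a minimal servlet sketch. The servlet class, URL pattern, and printed output are illustrative assumptions (as is a servlet feature such as servlet-3.1 in server.xml), but the annotation matches the resource reference above:

import java.io.IOException;
import java.sql.Connection;
import java.sql.SQLException;
import javax.annotation.Resource;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import javax.sql.DataSource;

@WebServlet("/dsTest")
public class DataSourceTestServlet extends HttpServlet {

    // Container managed resource reference: the containerAuthData from server.xml applies.
    @Resource(lookup = "jdbc/defautlDS", name = "java:comp/env/picoDS")
    private DataSource datasource;

    @Override
    protected void doGet(HttpServletRequest request, HttpServletResponse response)
            throws IOException {
        try (Connection con = datasource.getConnection()) {
            response.getWriter().println("Connected to: "
                    + con.getMetaData().getDatabaseProductName());
        } catch (SQLException e) {
            throw new IOException(e);
        }
    }
}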
I solved it, as suggested by Gas in the comments, using:
ds = (DataSource) new InitialContext().lookup("jdbc/defaultDS");
I was using CacheConfiguration in Ignite until I got stuck on an issue with how to authenticate. Because of that I started changing the CacheConfiguration to ClientCacheConfiguration. However, after the conversion I noticed that it is not able to save into the table, because ClientCacheConfiguration lacks the setIndexedTypes method, e.g.:
Before
CacheConfiguration<String, IgniteParRate> cacheCfg = new CacheConfiguration<>();
cacheCfg.setName(APIConstants.CACHE_PARRATES);
cacheCfg.setIndexedTypes(String.class, IgniteParRate.class);
New
ClientCacheConfiguration cacheCfg = new ClientCacheConfiguration();
cacheCfg.setName(APIConstants.CACHE_PARRATES);
//cacheCfg.setIndexedTypes(String.class, IgniteParRate.class); --> this is not provided
I still need the table to be populated so it is easier for us to verify (using a client IDE like DBeaver).
Is there any way to solve this issue?
If you need to create tables/cache dynamically using the thin-client, you'll need to use the setQueryEntities() method to define the columns available to SQL "manually". (Passing in the classes with annotations is basically a shortcut for defining the query entities.) I'm not sure why setIndexedTypes() isn't available in the thin-client; maybe a question for the developer mailing list.
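Here is a rough sketch of that approach. The column names, types, table/schema names, and credentials are assumptions for illustration; only the cache name constant and IgniteParRate come from the question:

import java.util.Collections;
import java.util.LinkedHashMap;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.QueryEntity;
import org.apache.ignite.cache.QueryIndex;
import org.apache.ignite.client.ClientCache;
import org.apache.ignite.client.ClientCacheConfiguration;
import org.apache.ignite.client.IgniteClient;
import org.apache.ignite.configuration.ClientConfiguration;

public class ThinClientQueryEntityExample {
    public static void main(String[] args) {
        ClientConfiguration cfg = new ClientConfiguration()
                .setAddresses("127.0.0.1:10800")
                .setUserName("ignite")        // assumed credentials
                .setUserPassword("ignite");
        try (IgniteClient client = Ignition.startClient(cfg)) {
            // Define the SQL columns "manually" instead of via setIndexedTypes.
            LinkedHashMap<String, String> fields = new LinkedHashMap<>();
            fields.put("currency", "java.lang.String"); // assumed field
            fields.put("rate", "java.lang.Double");     // assumed field

            QueryEntity entity = new QueryEntity(String.class.getName(),
                    IgniteParRate.class.getName())
                    .setTableName("PAR_RATES")          // table name shown to SQL tools
                    .setFields(fields)
                    .setIndexes(Collections.singletonList(new QueryIndex("currency")));

            ClientCacheConfiguration cacheCfg = new ClientCacheConfiguration()
                    .setName(APIConstants.CACHE_PARRATES)
                    .setSqlSchema("PUBLIC")             // so DBeaver finds it in the default schema
                    .setQueryEntities(entity);

            // Creates the cache (and its SQL table) if it does not exist yet.
            ClientCache<String, IgniteParRate> cache = client.getOrCreateCache(cacheCfg);
        }
    }
}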
Alternatively, you can define your caches/tables in advance using a thick client. They'll still be available when using the thin-client.
To add to the existing answer, you can also try using cache templates for this:
https://apacheignite.readme.io/docs/cache-template
Pre-configure the templates, then use them when creating caches from the thin client.
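A hedged sketch of that idea, reusing IgniteParRate from the question; the template name and addresses are assumptions, and the value class must be on the server's classpath:

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.client.ClientCache;
import org.apache.ignite.client.IgniteClient;
import org.apache.ignite.configuration.CacheConfiguration;
import org.apache.ignite.configuration.ClientConfiguration;

public class CacheTemplateExample {
    public static void main(String[] args) throws Exception {
        // On a server/thick node: register a template once. The trailing '*'
        // marks it as a template rather than a concrete cache.
        try (Ignite ignite = Ignition.start()) {
            CacheConfiguration<String, IgniteParRate> template =
                    new CacheConfiguration<>("parRateTemplate*");
            template.setIndexedTypes(String.class, IgniteParRate.class);
            ignite.addCacheConfiguration(template); // registers the template only

            // From the thin client: a new cache whose name matches the pattern
            // inherits the template configuration, including the indexed types.
            ClientConfiguration clientCfg =
                    new ClientConfiguration().setAddresses("127.0.0.1:10800");
            try (IgniteClient client = Ignition.startClient(clientCfg)) {
                ClientCache<String, IgniteParRate> cache =
                        client.getOrCreateCache("parRateTemplate-parRates");
            }
        }
    }
}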
I've been trying to create a new repository on a remote GraphDB server using RDF4J, but I'm having problems.
This runs, but is seemingly not correct
HTTPRepositoryConfig implConfig = new HTTPRepositoryConfig(address);
RepositoryConfig repoConfig = new RepositoryConfig("test", "test", implConfig);
Model m = new TreeModel();
However, based on the info I get from "edit repository" in the workbench, the result doesn't look right. All the values are empty, except for id and title.
This fails
I tried to copy the settings from an existing repository that I created on the workbench, but that failed with:
org.eclipse.rdf4j.repository.config.RepositoryConfigException:
Unsupported repository type: owlim:MonitorRepository
The code for that attempt is inspired by the one found here, except that the config file is based on an existing repo, as explained above. I also tried the config file provided in the example, but that failed as well:
org.eclipse.rdf4j.repository.config.RepositoryConfigException:
Unsupported Sail type: graphdb:FreeSail
Anyone got any tips?
UPDATE
As Henriette Harmse correctly pointed out, I should have provided my code, not simply linked to it. That way I might have discovered that I hadn't done a complete copy after all, but had changed the important first bits that she points out in her answer. Full code below:
String address = "serveradr";
RemoteRepositoryManager repositoryManager = new RemoteRepositoryManager( address);
repositoryManager.initialize();
// Instantiate a repository graph model
TreeModel graph = new TreeModel();
InputStream config = Rdf4jHelper.class.getResourceAsStream("/repoconf2.ttl");
RDFParser rdfParser = Rio.createParser(RDFFormat.TURTLE);
rdfParser.setRDFHandler(new StatementCollector(graph));
rdfParser.parse(config, RepositoryConfigSchema.NAMESPACE);
config.close();
// Retrieve the repository node as a resource
Resource repositoryNode = graph.filter(null, RDF.TYPE, RepositoryConfigSchema.REPOSITORY).subjects().iterator().next();
// Create a repository configuration object and add it to the repositoryManager
RepositoryConfig repositoryConfig = RepositoryConfig.create(graph, repositoryNode);
It fails on the last line.
ANSWERED: @HenrietteHarmse gives the correct method in her answer below. The error is caused by missing dependencies. Instead of using RDF4J directly, I should have used the graphdb-free-runtime.
There are a number of issues here:
(1) RepositoryManager repositoryManager = new LocalRepositoryManager(new File(".")); will create a repository wherever your Java application is running from.
(2) Changing to new LocalRepositoryManager(new File("$GraphDBInstall/data/repositories")) will cause the repository to be created under the control of GraphDB (assuming you have a local GraphDB instance) only if GraphDB is not running. If you start GraphDB after running your program, you will be able to see the repository in GraphDB workbench.
(3) What you need to do is get the repository manager of the remote GraphDB, which can be done with RepositoryManager repositoryManager = RepositoryProvider.getRepositoryManager("http://IPAddressOfGraphDB:7200");.
(4) In the way you have specified the config, you cause the RDF graph config to be lost. The correct way to specify it is shown below (a consolidated sketch follows this list):
RepositoryConfig repositoryConfig = RepositoryConfig.create(graph, repositoryNode);
repositoryManager.addRepositoryConfig(repositoryConfig);
(5) A minor issue is that GraphUtil.getUniqueSubject(...) has been deprecated, for which you can use something like the following:
Model model = graph.filter(null, RDF.TYPE, RepositoryConfigSchema.REPOSITORY);
Iterator<Statement> iterator = model.iterator();
if (!iterator.hasNext())
throw new RuntimeException("Oops, no <http://www.openrdf.org/config/repository#> subject found!");
Statement statement = iterator.next();
Resource repositoryNode = statement.getSubject();
EDIT on 20180408:
(5) Or you can use the compact option as @JeenBroekstra suggested in the comments:
Resource repositoryNode = Models.subject(
        graph.filter(null, RDF.TYPE, RepositoryConfigSchema.REPOSITORY))
    .orElseThrow(() -> new RuntimeException("Oops, no <http://www.openrdf.org/config/repository#> subject found!"));
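Putting points (3) to (5) together, here is a consolidated sketch; the GraphDB URL and the /repoconf2.ttl resource are the placeholders already used in this thread:

import java.io.InputStream;
import org.eclipse.rdf4j.model.Model;
import org.eclipse.rdf4j.model.Resource;
import org.eclipse.rdf4j.model.impl.TreeModel;
import org.eclipse.rdf4j.model.util.Models;
import org.eclipse.rdf4j.model.vocabulary.RDF;
import org.eclipse.rdf4j.repository.config.RepositoryConfig;
import org.eclipse.rdf4j.repository.config.RepositoryConfigSchema;
import org.eclipse.rdf4j.repository.manager.RepositoryManager;
import org.eclipse.rdf4j.repository.manager.RepositoryProvider;
import org.eclipse.rdf4j.rio.RDFFormat;
import org.eclipse.rdf4j.rio.RDFParser;
import org.eclipse.rdf4j.rio.Rio;
import org.eclipse.rdf4j.rio.helpers.StatementCollector;

public class CreateRemoteRepository {
    public static void main(String[] args) throws Exception {
        // (3) Get the repository manager of the remote GraphDB instance.
        RepositoryManager repositoryManager =
                RepositoryProvider.getRepositoryManager("http://IPAddressOfGraphDB:7200");

        // Parse the repository config file into an RDF model.
        Model graph = new TreeModel();
        try (InputStream config =
                CreateRemoteRepository.class.getResourceAsStream("/repoconf2.ttl")) {
            RDFParser rdfParser = Rio.createParser(RDFFormat.TURTLE);
            rdfParser.setRDFHandler(new StatementCollector(graph));
            rdfParser.parse(config, RepositoryConfigSchema.NAMESPACE);
        }

        // (5) Locate the repository node without the deprecated GraphUtil.
        Resource repositoryNode = Models.subject(
                graph.filter(null, RDF.TYPE, RepositoryConfigSchema.REPOSITORY))
                .orElseThrow(() -> new RuntimeException("No repository node found"));

        // (4) Create the config and register it with the remote manager.
        RepositoryConfig repositoryConfig = RepositoryConfig.create(graph, repositoryNode);
        repositoryManager.addRepositoryConfig(repositoryConfig);
    }
}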
EDIT on 20180409:
For convenience I have added the complete code example here.
EDIT on 20180410:
So the actual culprit turned out to be an incorrect pom.xml. The correct version is as below:
<dependency>
<groupId>com.ontotext.graphdb</groupId>
<artifactId>graphdb-free-runtime</artifactId>
<version>8.4.1</version>
</dependency>
I believe I just had the same issue. I used the example code from GraphDB Free for running with RDF4J as a remote service and ran into the same exception as you (Unsupported Sail type: graphdb:FreeSail). Henriette Harmse's answer does not directly address this issue, but one should follow the suggestions given there to avoid running into issues later. In addition, based on a look into the RDF4J code, you need the following dependency in your pom.xml file (assuming GraphDB 8.5):
<dependency>
<groupId>com.ontotext.graphdb</groupId>
<artifactId>graphdb-free-runtime</artifactId>
<version>8.5.0</version>
</dependency>
This seems to be because there is some kind of service loading going on with META-INF, which I frankly am not familiar with. Maybe someone can provide more details in the comments. The requirement to add this dependency also seems to be absent from the instructions, so if this works for you, please let me know. Others who followed the same steps should then be able to resolve this issue as well.
We are using ejb-jar.xml to configure the MDB as shown below. We need to access one of the activation config properties, "subscriptionName", in the MDB.
<message-driven>
<ejb-name>InboundListener</ejb-name>
<!--Class whose MDB instance with above name will be created by EJB container-->
<ejb-class>com.xyz.listener.InboundListener</ejb-class>
<activation-config>
<activation-config-property>
<!--The type of JMS resource this instance will be accessing-->
<activation-config-property-name>destinationType</activation-config-property-name>
<activation-config-property-value>javax.jms.Topic</activation-config-property-value>
</activation-config-property>
<activation-config-property>
<activation-config-property-name>subscriptionName</activation-config-property-name>
<activation-config-property-value>OXI145937</activation-config-property-value>
</activation-config-property>
</activation-config>
</message-driven>
How do I get the value of the subscriptionName property in the MDB's onMessage() method?
Appreciate your help and time.
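There is no portable EJB API for reading activation-config properties at runtime, so one workaround is to parse the deployment descriptor yourself. Here is a hedged sketch, assuming the descriptor is packaged at META-INF/ejb-jar.xml in the EJB module:

import java.io.InputStream;
import javax.annotation.PostConstruct;
import javax.jms.Message;
import javax.jms.MessageListener;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.xpath.XPath;
import javax.xml.xpath.XPathFactory;
import org.w3c.dom.Document;

public class InboundListener implements MessageListener {

    private String subscriptionName;

    @PostConstruct
    void readSubscriptionName() {
        try (InputStream in = getClass().getResourceAsStream("/META-INF/ejb-jar.xml")) {
            // The default factory is not namespace aware, so local names work in XPath.
            Document doc = DocumentBuilderFactory.newInstance()
                    .newDocumentBuilder().parse(in);
            XPath xp = XPathFactory.newInstance().newXPath();
            // Select the subscriptionName value of this bean's <message-driven> entry.
            subscriptionName = xp.evaluate(
                    "//message-driven[ejb-name='InboundListener']"
                    + "/activation-config/activation-config-property["
                    + "activation-config-property-name='subscriptionName']"
                    + "/activation-config-property-value", doc);
        } catch (Exception e) {
            throw new IllegalStateException("Could not read ejb-jar.xml", e);
        }
    }

    @Override
    public void onMessage(Message message) {
        // subscriptionName is available here.
    }
}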
I need to fluently configure NHibernate in my S#arp application so that I can use a custom NHibernate.Search directory for each of my tenants in a multi-tenant app.
However, I have googled for hours looking for a solution and can't seem to find anything current that works.
Thanks,
Paul
I haven't tried this myself, but AddConfiguration takes a dictionary of cfgProperties, which I guess you can pass the tenant-specific hibernate.search.default.indexBase value to.
I had a look at this; adding the key as described above will cause a problem if you attempt to use CfgHelper.LoadConfiguration(), since it will return null.
But you can configure NHSearch to use different directories for each factory using the factory key:
<nhs-configuration xmlns="urn:nhs-configuration-1.0">
<search-factory sessionFactoryName="YOUR_TENANT1_FACTORY_KEY">
<property name="hibernate.search.default.indexBase">~\IndexTenant1</property>
</search-factory>
<search-factory sessionFactoryName="YOUR_TENANT2_FACTORY_KEY">
<property name="hibernate.search.default.indexBase">~\Tenant2</property>
</search-factory>
</nhs-configuration>
If you are following the instructions on
http://wiki.sharparchitecture.net/Default.aspx?Page=NHibSearch
you would need to change the GetIndexDirectory method to:
private string GetIndexDirectory() {
INHSConfigCollection nhsConfigCollection = CfgHelper.LoadConfiguration();
string factoryKey = SessionFactoryAttribute.GetKeyFrom(this); // Replace this with however you get the factory key for your tenants
string property = nhsConfigCollection.GetConfiguration(factoryKey).Properties["hibernate.search.default.indexBase"];
var fi = new FileInfo(property);
return Path.Combine(AppDomain.CurrentDomain.BaseDirectory, fi.Name);
}
I have multiple BIRT reports that obtain their data from the same JDBC data source.
Is it possible to obtain the connection parameters (driver URL, user name, and password) from an external property file or similar?
Once you create a functional data source, you can add that data source to a report library that can be imported and used by all BIRT reports in your system. The source inside the library can have static connection attributes, or you can abstract them using externalized properties.
If you want to externalize the connection info, you will need to tweak the Data source itself. Inside the Data Source Editor, there is a "Property Binding" section that allows you to abstract all the values governing the data connection. From there you can bind the values (using the expression editor) to either report parameters or a properties file.
To bind to a report parameter, use this syntax: params["parametername"].value as the expression.
To bind to a properties file, set the Resource file in the Report's top-level properties. From there you can just use the property key value to bind the entry to the Data Source.
Good Luck!
An alternative to @Mystik's good "Property binding" solution is externalizing to a connection profile.
Create a data source (say "DS"), setting up a correct configuration of the parameters to connect to a DB.
Right click on "DS" > Externalize to Connection Profile... > check both options, set a name for the Connection Profile, Ok > set the path and filename where to save the Connection Profile Store (say "reportName.cps"), uncheck Encrypt... (this way we can modify the information in the XML file by hand).
Now we have "reportName.cps", an XML file that we can modify according to the environment where we place our report (development, production, ...). The problem is that "DS" has loaded that information statically from "reportName.cps"; it loads it dynamically only if it can find "reportName.cps" at the absolute path we specified. So when the environment changes, the file path will be different and the report won't find our file. To tell the report the correct location of the file and load it dynamically, let's write a script:
Setup a beforeOpen script to use the connection profile that is deployed in the resource folder which can be different for every environment:
var myresourcefolder = reportContext.getDesignHandle().getResourceFolder();
this.setExtensionProperty("OdaConnProfileStorePath", myresourcefolder + "/reportName.cps");
For those struggling to configure a connection profile, the files must look as follows (example using PostgreSQL):
db-config-birt.xml (or whatever name)
<?xml version="1.0"?>
<DataTools.ServerProfiles version="1.0">
<profile autoconnect="No" desc="" id="uuid" name="MyPostgreSQL"
providerID="org.eclipse.birt.report.data.oda.jdbc">
<baseproperties>
<property name="odaDriverClass" value="org.postgresql.Driver"/>
<property name="odaURL" value="jdbc:postgresql://XX:5432/YY"/>
<property name="odaPassword" value="zzz"/>
<property name="odaUser" value="abc"/>
</baseproperties>
</profile>
</DataTools.ServerProfiles>
The key points here are:
The XML MUST start with <?xml version="1.0"?> (or <?xml version="1.0" encoding="UTF-8" standalone="no"?>, but when I used that I was getting a parsing exception while deploying on Tomcat)
The properties keys MUST be odaDriverClass, odaURL, odaPassword, odaUser (order doesn't matter)
This file must be readable by the server process, e.g. chmod 664 this file
If any of the conditions above aren't met, BIRT will throw a laconic:
org.eclipse.birt.report.engine.api.EngineException: An exception occurred during processing. Please see the following message for details:
Cannot open the connection for the driver: org.eclipse.birt.report.data.oda.jdbc.
org.eclipse.birt.report.data.oda.jdbc.JDBCException: Missing properties in Connection.open(Properties). ;
org.eclipse.datatools.connectivity.oda.OdaException: Unable to find or access the named profile (MyPostgreSQL) in profile store path (/opt/tomcat/mytomcat/conf/db-config-birt.xml). ;
org.eclipse.datatools.connectivity.oda.OdaException ;
Then in the report (myreport.rptdesign), in the XML of it, the datasource must then look like that:
myreport.rptdesign (or whatever name)
<data-sources>
<oda-data-source extensionID="org.eclipse.birt.report.data.oda.jdbc" name="MyPostgreSQL" id="320">
<property name="OdaConnProfileName">MyPostgreSQL</property>
<property name="OdaConnProfileStorePath">/opt/tomcat/mytomcat/conf/db-config-birt.xml</property>
</oda-data-source>
</data-sources>
Obviously, you will adapt the OdaConnProfileStorePath to suit your needs.