I have an object with embedded members that I'm persisting without problems using the RDBMS plugin and MySQL.
When I change the datastore to S3 (JSON plugin), I get the following exception:
Dec 30, 2011 9:50:30 AM org.datanucleus.state.JDOStateManagerImpl isLoaded
WARNING: Exception thrown by StateManager.isLoaded
This constructor is only for objects using application identity.
org.datanucleus.exceptions.NucleusUserException: This constructor is only for objects using application identity.
at org.datanucleus.state.JDOStateManagerImpl.initialiseForHollowAppId(JDOStateManagerImpl.java:226)
at org.datanucleus.state.ObjectProviderFactory.newForHollowPopulatedAppId(ObjectProviderFactory.java:119)
at org.datanucleus.store.json.fieldmanager.FetchFieldManager.getObjectFromJSONObject(FetchFieldManager.java:322)
at org.datanucleus.store.json.fieldmanager.FetchFieldManager.fetchObjectField(FetchFieldManager.java:250)
at org.datanucleus.state.AbstractStateManager.replacingObjectField(AbstractStateManager.java:2228)
at myproject.MyObject.jdoReplaceField(Unknown Source)
at myproject.MyObject.jdoReplaceFields(Unknown Source)
at org.datanucleus.state.JDOStateManagerImpl.replaceFields(JDOStateManagerImpl.java:1949)
at org.datanucleus.state.JDOStateManagerImpl.replaceFields(JDOStateManagerImpl.java:1976)
at org.datanucleus.store.json.JsonPersistenceHandler.fetchObject(JsonPersistenceHandler.java:269)
at org.datanucleus.state.JDOStateManagerImpl.loadFieldsFromDatastore(JDOStateManagerImpl.java:1652)
at org.datanucleus.state.JDOStateManagerImpl.loadSpecifiedFields(JDOStateManagerImpl.java:1254)
at org.datanucleus.state.JDOStateManagerImpl.isLoaded(JDOStateManagerImpl.java:1742)
at myproject.MyObject.jdoGetmember_(Unknown Source)
at myproject.MyObject.getMember(Unknown Source)
member_ in myproject.MyObject is defined as:
@Persistent
@Embedded(members = {
    ...
})
private Member member_;
and
@PersistenceCapable(detachable="true")
@EmbeddedOnly
public class Member implements Serializable {
(no application identity, no key)
The jdoconfig.xml is roughly:
<jdoconfig
xmlns="http://java.sun.com/xml/ns/jdo/jdoconfig"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:noNamespaceSchemaLocation="http://java.sun.com/xml/ns/jdo/jdoconfig">
<persistence-manager-factory name="trans-optional">
<property name="javax.jdo.PersistenceManagerFactoryClass"
value="org.datanucleus.api.jdo.JDOPersistenceManagerFactory"/>
<property name="datanucleus.ConnectionURL"
value="amazons3:http://s3.amazonaws.com/"/>
<property name="datanucleus.ConnectionUserName"
value="..."/>
<property name="datanucleus.ConnectionPassword"
value="..."/>
<property name="datanucleus.cloud.storage.bucket"
value="mybucket"/>
</persistence-manager-factory>
</jdoconfig>
I've been to the Supported Features table but I must admit I don't fully understand it.
Does it say that the JSON plugin does NOT support embedded objects?
Why do my embedded objects need application identity? If I define them with application identity I'm also asked to provide a key, and I don't want that; I want them to be embedded.
Any help will be much appreciated!
As the Supported Features table says very clearly (to me), there is a CROSS against the JSON datastore column for the feature "Embedded PC", hence it is not supported for that datastore. Obviously, if some user/company wanted such a feature they could either
Update the JSON plugin to support it, as was done for the ODF plugin, for example
Sponsor that work.
Alternatively, don't use embedded objects with that datastore.
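If you go the non-embedded route, a rough sketch (plain JDO annotations; the key field is purely illustrative, added because a standalone class on this datastore apparently expects application identity) could look like this:

import java.io.Serializable;
import javax.jdo.annotations.IdentityType;
import javax.jdo.annotations.PersistenceCapable;
import javax.jdo.annotations.Persistent;
import javax.jdo.annotations.PrimaryKey;

// Member as an ordinary persistable class instead of an embedded-only one.
@PersistenceCapable(identityType = IdentityType.APPLICATION, detachable = "true")
public class Member implements Serializable {
    @PrimaryKey
    private String id;      // hypothetical key field

    // ... other fields unchanged ...
}

// In the owning class (its own source file), member_ becomes a plain 1-1 relation, with no @Embedded.
@PersistenceCapable(detachable = "true")
public class MyObject {
    @Persistent
    private Member member_;
}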
I'm having trouble reading a YAML file that consists of a list in MuleSoft.
I have my YAML file set up as a Configuration property in my Global Elements, in the following structure.
applications:
- appId: "123456"
appName: Application One
- appId: "456789"
appName: Application Two
I am able to read the values when there's no list. But when I have it set up as a list, MuleSoft throws this error:
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
+ Failed to deploy artifact 'test', see below +
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
org.mule.runtime.deployment.model.api.DeploymentException: Failed to deploy artifact [test]
Caused by: org.mule.runtime.api.exception.MuleRuntimeException: org.mule.runtime.deployment.model.api.DeploymentInitException: ConfigurationPropertiesException: Configuration properties does not support type a list of complex types. Complex type keys are: appId,appName
Caused by: org.mule.runtime.deployment.model.api.DeploymentInitException: ConfigurationPropertiesException: Configuration properties does not support type a list of complex types. Complex type keys are: appId,appName
Caused by: org.mule.runtime.core.api.config.ConfigurationException: Configuration properties does not support type a list of complex types. Complex type keys are: appId,appName
Caused by: org.mule.runtime.config.internal.dsl.model.config.ConfigurationPropertiesException: Configuration properties does not support type a list of complex types. Complex type keys are: appId,appName
Please help me out: is this not the right way to use a YAML file? Do I need to change my format?
Thank you!
Configuration properties do not support lists of complex types, because the properties have to be converted into Spring properties on the back end.
You can only use simple lists:
applications:
- "123456"
- "456789"
But a simple object would work just as well for properties:
applications:
"123456":
name: Application One
"456789":
name: Application Two
Then you could look up dynamically by the appId like:
<set-variable variableName="appId" value="123456" />
<logger level="INFO" message="#[p('applications.' ++ vars.appId ++ '.name')]" />
I need to read the properties defined in one of my .yaml files (e.g. banner.yaml). These properties should be read in a Java class so that they can be accessed and the appropriate operation can be performed.
This is my label.yaml file:
/content/documents/administration/labels:
jcr:primaryType: hippostd:folder
jcr:mixinTypes: ['mix:referenceable']
jcr:uuid: 7ec0e757-373b-465a-9886-d072bb813f58
hippostd:foldertype: [new-resource-bundle, new-untranslated-folder]
/global:
jcr:primaryType: hippo:handle
jcr:mixinTypes: ['hippo:named', 'mix:referenceable']
jcr:uuid: 31e4796a-4025-48a5-9a6e-c31ba1fb387e
hippo:name: Global
How should I access the hippo:name property, which should return Global as its value, from a Java class?
Any help will be appreciated.
Create a class which extends BaseHstComponent, which allows you to make use of HST Content Beans.
Create a Session object; for this you need valid credentials for your repository.
Session session = repository.login("admin", "admin".toCharArray());
Now, create an object of javax.jcr.Node; for this you need the relative path to your node. In your case it will be /content/documents/administration/labels/global:
Node node = session.getRootNode().getNode("content/documents/administration/labels/global");
Now, by using the getProperty method you can access the property:
node.getProperty("hippo:name").getString();
You can refer to the link https://www.onehippo.org/library/concepts/content-repository/jcr-interface.html
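Putting those steps together, a minimal sketch (assuming the repository is obtained via HippoRepositoryFactory; the repository address and the admin credentials are placeholders you would replace) might be:

import javax.jcr.Node;
import javax.jcr.Session;
import org.hippoecm.repository.HippoRepository;
import org.hippoecm.repository.HippoRepositoryFactory;

public class LabelReader {

    public String readGlobalName() throws Exception {
        // Connect to the repository (address and credentials are placeholders).
        HippoRepository repository =
                HippoRepositoryFactory.getHippoRepository("rmi://localhost:1099/hipporepository");
        Session session = repository.login("admin", "admin".toCharArray());
        try {
            // Navigate to the bootstrapped node and read its property.
            Node node = session.getRootNode().getNode("content/documents/administration/labels/global");
            return node.getProperty("hippo:name").getString();
        } finally {
            session.logout();
        }
    }
}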
You can't read a YAML file from within the application; the YAML file is bootstrapped into the repository. The data you show represents a resource bundle. You can access it programmatically using the utility class ResourceBundleUtils#getBundle.
Or use the corresponding tag on a template; then you can use keys as normal.
I strongly suggest you follow our tutorials before continuing.
more details here:
https://www.onehippo.org/library/concepts/translations/hst-2-dynamic-resource-bundles-support.html
I am currently using Spring Boot Starter 1.4.2.RELEASE, and Geode Core 1.0.0-incubating via Maven, against a local Docker configuration consisting of a Geode Locator, and 2 cache nodes.
I've consulted the documentation here:
http://geode.apache.org/docs/guide/developing/distributed_regions/locking_in_global_regions.html
I have configured a cache.xml file for use with my application like so:
<?xml version="1.0" encoding="UTF-8"?>
<client-cache
xmlns="http://geode.apache.org/schema/cache"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://geode.apache.org/schema/cache
http://geode.apache.org/schema/cache/cache-1.0.xsd"
version="1.0">
<pool name="serverPool">
<locator host="localhost" port="10334"/>
</pool>
<region name="testRegion" refid="CACHING_PROXY">
<region-attributes pool-name="serverPool"
scope="global"/>
</region>
</client-cache>
In my Application.java I have exposed the region as a bean via:
@SpringBootApplication
public class Application {

    @Bean
    ClientCache cache() {
        return new ClientCacheFactory()
            .create();
    }

    @Bean
    Region<String, Integer> testRegion(final ClientCache cache) {
        return cache.<String, Integer>getRegion("testRegion");
    }

    public static void main(String[] args) {
        SpringApplication.run(Application.class, args);
    }
}
And in my "service" DistributedCounter.java:
@Service
public class DistributedCounter {

    @Autowired
    private Region<String, Integer> testRegion;

    /**
     * Uses a fine-grained lock on the modifier.
     * @param counterKey {@link String} containing the key whose value should be incremented.
     */
    public void incrementCounter(String counterKey) {
        if (testRegion.getDistributedLock(counterKey).tryLock()) {
            try {
                Integer old = testRegion.get(counterKey);
                if (old == null) {
                    old = 0;
                }
                testRegion.put(counterKey, old + 1);
            } finally {
                testRegion.getDistributedLock(counterKey).unlock();
            }
        }
    }
}
I have used gfsh to configure a region named /testRegion - however there is no option to indicate that its type should be "GLOBAL"; only a variety of other options are available. Ideally this should be a persistent and replicated cache, though, so I used the following command:
create region --name=/testRegion --type=REPLICATE_PERSISTENT
Using the how-to at: http://geode.apache.org/docs/guide/getting_started/15_minute_quickstart_gfsh.html it is easy to see the functionality of persistence and replication on my two node configuration.
However, the locking in DistributedCounter, above, does not cause any errors - but it just does not work when two processes attempt to acquire a lock on the same "key" - the second process is not blocked from acquiring the lock. There is an earlier code sample from the Gemfire forums which uses the DistributedLockService - which the current documentation warns against using for locking region entries.
Is the use-case of fine-grained locking to support a "map" of atomically incremental longs a supported use case, and if so, how to appropriately configure it?
The Region APIs for DistributedLock and RegionDistributedLock only support Regions with Global scope. These DistributedLocks have locking scope within the name of the DistributedLockService (which is the full path name of the Region) only within the cluster. For example, if the Global Region exists on a Server, then the DistributedLocks for that Region can only be used on that Server or on other Servers within that cluster.
Cache Clients were originally a form of hierarchical caching, which means that one cluster could connect to another cluster as a Client. If a Client created an actual Global region, then the DistributedLock within the Client would only have a scope within that Client and the cluster that it belongs to. DistributedLocks do not propagate in any way to the Servers that such a Client is connected to.
The correct approach would be to write Function(s) that utilize the DistributedLock APIs on Global regions that exist on the Server(s). You would deploy those Functions to the Server and then invoke them on the Server(s) from the Client.
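A minimal sketch of such a server-side Function (the class name, result value and error handling are illustrative, not part of this answer, and it assumes the server-side testRegion is defined with scope global) could look like this:

import java.util.concurrent.locks.Lock;
import org.apache.geode.cache.Region;
import org.apache.geode.cache.execute.Function;
import org.apache.geode.cache.execute.FunctionContext;
import org.apache.geode.cache.execute.RegionFunctionContext;

// Deployed to the servers; increments a counter while holding the Region's DistributedLock.
public class IncrementCounterFunction implements Function {

    @Override
    public void execute(FunctionContext context) {
        RegionFunctionContext rfc = (RegionFunctionContext) context;
        Region<String, Integer> region = rfc.getDataSet();   // the Global region hosted on this server
        String key = (String) rfc.getArguments();            // counter key passed from the client
        Lock lock = region.getDistributedLock(key);
        lock.lock();
        try {
            Integer old = region.get(key);
            region.put(key, old == null ? 1 : old + 1);
        } finally {
            lock.unlock();
        }
        context.getResultSender().lastResult(Boolean.TRUE);
    }

    @Override
    public String getId() { return "IncrementCounterFunction"; }

    @Override
    public boolean hasResult() { return true; }

    @Override
    public boolean isHA() { return false; }

    @Override
    public boolean optimizeForWrite() { return false; }
}

The client would deploy this Function to the servers and invoke it on the region, e.g. FunctionService.onRegion(testRegion).withArgs(counterKey).execute("IncrementCounterFunction").getResult();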
In general, use of Global regions is avoided because every individual put acquires a DistributedLock within the Server's cluster, and this is a very expensive operation.
You could do something similar with a non-Global region by creating a custom DistributedLockService on the Servers and then use Functions to lock/unlock around code that you need to be globally synchronized within that cluster. In this case, the DistributedLock and RegionDistributedLock APIs on Region (for the non-Global region) would be unavailable and all locking would have to be done within a Function on the Server using the DistributedLockService API.
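For that non-Global variant, a rough sketch of a helper used inside such a server-side Function (the lock service name and wait time are arbitrary choices here, not from this answer) could be:

import org.apache.geode.cache.CacheFactory;
import org.apache.geode.distributed.DistributedLockService;

public final class CounterLockHelper {

    // Server-side only: runs the given work while holding a lock on 'key'
    // in a custom DistributedLockService named "counterLocks".
    public static void withLock(String key, Runnable work) {
        DistributedLockService dls = DistributedLockService.getServiceNamed("counterLocks");
        if (dls == null) {
            dls = DistributedLockService.create("counterLocks",
                    CacheFactory.getAnyInstance().getDistributedSystem());
        }
        if (dls.lock(key, 5000, -1)) {   // wait up to 5 s for the lock, no lease expiry
            try {
                work.run();              // e.g. a read-modify-write on the non-Global region
            } finally {
                dls.unlock(key);
            }
        }
    }
}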
This only works for server side code (in Functions for example).
From client code you can implement locking semantics using "region.putIfAbsent".
If 2 (or more) clients call this API on the same region and key, only one will successfully put, which is indicated by a return value of null. This client is considered to hold the lock. The other clients will get the object that was put by the winner. This is handy because, if the value you "put" contains a unique identifier of the client, then the losers even know who is holding the lock.
Having a region entry represent a lock has other nice benefits. The lock survives across failures. You can use region expiration to set the maximum lease time for a lock, and, as mentioned previously, it's easy to tell who is holding the lock.
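A minimal client-side sketch of that idea (the region, key and owner values are illustrative) could be:

import java.util.UUID;
import org.apache.geode.cache.Region;

public final class EntryLock {

    // Identifies this client so the losers can see who holds the lock.
    private static final String OWNER = UUID.randomUUID().toString();

    // Returns true if this client won the putIfAbsent race and now "holds" the lock.
    public static boolean tryLock(Region<String, String> locks, String key) {
        return locks.putIfAbsent(key, OWNER) == null;
    }

    // Releases the lock, but only if this client still owns it.
    public static void unlock(Region<String, String> locks, String key) {
        locks.remove(key, OWNER);
    }
}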
Hope this helps.
It seems that gfsh does not provide an option to set the correct scope=GLOBAL.
Maybe you could start a server with the --cache-xml-file option, which would point to a cache.xml file.
The cache.xml file should look like this:
<?xml version="1.0" encoding="UTF-8"?>
<cache xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://schema.pivotal.io/gemfire/cache" xsi:schemaLocation="http://schema.pivotal.io/gemfire/cache http://schema.pivotal.io/gemfire/cache/cache-8.1.xsd" version="8.1" lock-lease="120" lock-timeout="60" search-timeout="300" is-server="true" copy-on-read="false">
<cache-server port="0"/>
<region name="testRegion">
<region-attributes data-policy="persistent-replicate" scope="global"/>
</region>
</cache>
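The server could then be started against this file, for example (the member name and path are placeholders):
gfsh> start server --name=server1 --cache-xml-file=/path/to/cache.xml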
Also, the client configuration does not need to define the scope in region-attributes.
Firstly, I would like to state our environment details.
We are trying to use EJB-Hibernate with SQL Azure to create apps on the Azure cloud using Eclipse.
We need to create and transact on databases dynamically. We are able to create databases dynamically; however, on trying to transact on these we are getting an error:
"java.sql.SQLException: No suitable driver found for connection url"
When we transacted statically using JPA there was no problem; however, dynamic transactions cannot be done. The EntityManager object is created but is not able to connect to the database.
Could someone help us and explain how we can handle transactions using JPA for dynamically created databases?
Thanks,
Saugata
[edit] We are using the following persistence.xml:
<provider>org.hibernate.ejb.HibernatePersistence</provider>
<!-- <jta-data-source>java:jboss/EDS</jta-data-source> -->
<class>net.oauth.database.Co</class>
<class>net.oauth.database.Cr</class>
<property name="hibernate.transaction.factory_class"
    value="org.hibernate.transaction.JTATransactionFactory" />
<property name="hibernate.transaction.manager_lookup_class"
    value="org.hibernate.transaction.JBossTransactionManagerLookup" />
Our code to connect to the db is as follows:
Map configOverrides = new HashMap();
configOverrides.put("hibernate.connection.password", "");
configOverrides.put("hibernate.connection.username", "");
configOverrides.put("hibernate.connection.driver_class","com.microsoft.sqlserver.jdbc.SQLServerDriver");
configOverrides.put("hibernate.connection.url", "jdbc:sqlsever://;" + "databaseName=;user=;password=");
EntityManagerFactory factory = Persistence.createEntityManagerFactory(ENTERPRISE_UNIT_NAME, configOverrides);
Please note that we are trying to create and connect to the DB dynamically, and hence do not have the DB created statically.
For this we are getting the error:
"java.sql.SQLException: No suitable driver found for connection url"
Create a persistence.xml with a persistence unit and put everything there which is static (e.g. database dialect, logging parameters, etc.).
Then use the following method to create the entity manager factory:
javax.persistence.Persistence.createEntityManagerFactory(String persistenceUnitName, Map properties);
Supply the variable parameters in the map, like this:
properties.put("hibernate.connection.url", "jdbc:postgresql://127.0.0.1/test");
properties.put("hibernate.connection.username", "joe");
properties.put("hibernate.connection.password", "pass");
I have multiple BIRT reports that obtain their data from the same JDBC data source.
Is it possible to obtain the connection parameters (Driver URL, User Name, and Password) from an external property file or similar?
Once you create a functional data source, you can add that data source to a report library that can be imported and used by all BIRT reports in your system. The source inside the library can have static connection attributes, or you can abstract them using externalized properties.
If you want to externalize the connection info, you will need to tweak the data source itself. Inside the Data Source Editor, there is a "Property Binding" section that allows you to abstract all the values governing the data connection. From there you can bind the values (using the expression editor) to either report parameters or a properties file.
To bind to a report parameter, use this syntax: params["parametername"].value as the expression.
To bind to a properties file, set the Resource file in the Report's top-level properties. From there you can just use the property key value to bind the entry to the Data Source.
Good Luck!
An alternative to @Mystik's good "Property binding" solution is externalizing to a connection profile.
Create a data source (say "DS"), setting up a correct configuration of the parameters to connect to a DB.
Right click on "DS" > Externalize to Connection Profile... > check both options, set a name for the Connection Profile, Ok > set the path and filename where to save the Connection Profile Store (say "reportName.cps"), uncheck Encrypt... (this way we can modify the information in the XML file by hand).
Now we have "reportName.cps", an XML file that we can modify according to the environment where we place our report (development, production, ...). The problem is that "DS" has loaded that info statically from "reportName.cps"; it loads it dynamically only if it can find "reportName.cps" at the absolute path we specified. So when the environment changes, the file path will be different and the report won't find our file. To tell the report the correct location of the file and load it dynamically, let's write a script:
Setup a beforeOpen script to use the connection profile that is deployed in the resource folder which can be different for every environment:
var myresourcefolder = reportContext.getDesignHandle().getResourceFolder();
this.setExtensionProperty("OdaConnProfileStorePath", myresourcefolder + "/reportName.cps");
For those struggling to configure a connection profile, the files must look as follows (example using PostgreSQL):
db-config-birt.xml (or whatever name)
<?xml version="1.0"?>
<DataTools.ServerProfiles version="1.0">
<profile autoconnect="No" desc="" id="uuid" name="MyPostgreSQL"
providerID="org.eclipse.birt.report.data.oda.jdbc">
<baseproperties>
<property name="odaDriverClass" value="org.postgresql.Driver"/>
<property name="odaURL" value="jdbc:postgresql://XX:5432/YY"/>
<property name="odaPassword" value="zzz"/>
<property name="odaUser" value="abc"/>
</baseproperties>
</profile>
</DataTools.ServerProfiles>
The key points here are:
The XML MUST start with <?xml version="1.0"?> (or <?xml version="1.0" encoding="UTF-8" standalone="no"?>, but when I was using that I was getting a parsing exception while deploying on Tomcat)
The property keys MUST be odaDriverClass, odaURL, odaPassword, odaUser (order doesn't matter)
This file must have the right permissions to be accessed, e.g. chmod 664 this file
If any of these conditions isn't met, BIRT will throw a laconic:
org.eclipse.birt.report.engine.api.EngineException: An exception occurred during processing. Please see the following message for details:
Cannot open the connection for the driver: org.eclipse.birt.report.data.oda.jdbc.
org.eclipse.birt.report.data.oda.jdbc.JDBCException: Missing properties in Connection.open(Properties). ;
org.eclipse.datatools.connectivity.oda.OdaException: Unable to find or access the named profile (MyPostgreSQL) in profile store path (/opt/tomcat/mytomcat/conf/db-config-birt.xml). ;
org.eclipse.datatools.connectivity.oda.OdaException ;
Then in the report (myreport.rptdesign), in the XML of it, the datasource must then look like that:
myreport.rptdesign (or whatever name)
<data-sources>
<oda-data-source extensionID="org.eclipse.birt.report.data.oda.jdbc" name="MyPostgreSQL" id="320">
<property name="OdaConnProfileName">MyPostgreSQL</property>
<property name="OdaConnProfileStorePath">/opt/tomcat/mytomcat/conf/db-config-birt.xml</property>
</oda-data-source>
</data-sources>
Obviously, you will adapt the OdaConnProfileStorePath to suit your needs.