Restcomm - Is it possible to change the database configuration? - JBoss 7.x

These are my first steps with Restcomm.
I managed to install Restcomm (the JBoss version); on the first run I got to hear the welcome message and was able to access the admin page.
Then I restarted JBoss and saw some errors, so I tried to change the database in ibatis.conf from HSQL to PostgreSQL.
My question is:
Is it possible to change the database (users, roles and rules), and where can I find the instructions for doing this?
I also have another question. Given this dataSource configuration:
<dataSource type="POOLED">
<property name="driver" value="org.hsqldb.jdbcDriver"/>
<property name="url" value="jdbc:hsqldb:file://${data}/restcomm;ifexists=true;hsqldb.write_delay=false;shutdown=true"/>
<property name="username" value="sa"/>
<property name="password" value=""/>
</dataSource>
Where does Restcomm store the HSQLDB data? I deleted everything, installed again, and my problem still continues.

The following link shows how to configure a different database (MariaDB) on Restcomm:
http://docs.telestax.com/restcomm-install-and-configure-restcomm-to-use-mariadb/
You will also need the PostgreSQL JDBC driver to configure a PostgreSQL database.
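As a rough sketch of what the equivalent PostgreSQL dataSource could look like (assuming the standard PostgreSQL JDBC driver; the host, port, database name and credentials below are placeholders for your own setup):
<dataSource type="POOLED">
<!-- requires the PostgreSQL JDBC driver jar on the classpath -->
<property name="driver" value="org.postgresql.Driver"/>
<!-- hypothetical host/port/database; replace with your own -->
<property name="url" value="jdbc:postgresql://localhost:5432/restcomm"/>
<property name="username" value="restcomm"/>
<property name="password" value="changeme"/>
</dataSource>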

Hibernate Search & Lucene - set write lock timeout

I have an
org.apache.lucene.store.LockObtainFailedException: Lock obtain timed out: NativeFSLock#/XXXXX/User_Index/write.lock
exception, and I read that the write lock timeout should be increased from the default 1 second.
(
Interestingly, I didn't have this exception before, but I have been working on a task to introduce Spring into the project. Is there a small chance that there are more competing transactions trying to get access to the index? I don't think so; I believe the Spring transaction handling is configured properly:
<!-- for the @Transactional annotations -->
<tx:annotation-driven />
<context:component-scan base-package="XXX.audit, XXX.authorization, XXX.policy, XXX.printing, XXX.provisioning, XXX.service.plainspring" />
<!-- defining Transaction Manager for Spring -->
<bean id="transactionManager" class="org.springframework.orm.hibernate4.HibernateTransactionManager">
<property name="dataSource" ref="dataSource" />
<property name="sessionFactory" ref="sessionFactory" />
</bean>
)
So I tried to configure the write lock timeout like this:
<bean id="sessionFactory" class="org.springframework.orm.hibernate4.LocalSessionFactoryBean" lazy-init="true">
...
<property name="hibernateProperties">
<props>
...
<prop key="hibernate.search.lucene_version">LUCENE_35</prop>
<prop key="hibernate.search.default.indexwriter.writeLockTimeout">20000</prop>
...
</property>
<property name="dataSource">
<ref bean="dataSource"/>
</property>
</bean>
but with no success. Apache Lucene doesn't have a config file, and there is no direct Lucene code; only Hibernate Search is used (i.e. it is not possible to set the value on an IndexWriter directly).
How can I configure the write lock timeout?
Apache Lucene 3.5
Hibernate Search 4.1.1
Thanks,
V.
There is no option to configure the IndexWriter lock timeout, as this should never be needed.
If you see such a timeout happening, it's usually because of one of the following:
There is a lock file in the index directory as a leftover from a crashed JVM
The configuration isn't suitable for the architecture of the application
Check the leftover scenario first: shut down your application and see if there is a file named write.lock. If the application is not running, it's safe to delete this file.
If that's not the case, then you probably have two different instances of Hibernate Search attempting to use the same index directory, and both attempting to write to it.
That's not a valid configuration, and you're getting the exception because the index is already locked by the other instance; increasing the lock timeout would only have you wait for a very long time - possibly until the other application is shut down.
Don't share indexes among applications; if you really need to do so, check the manual for the JMS-based backends or other non-default backends which allow multiple applications to share a single IndexWriter. A sketch follows below.
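For reference, a minimal sketch of pointing the worker at the JMS backend, in the same hibernateProperties style as above (the connection factory and queue JNDI names are assumptions for illustration; a master node still has to be configured separately to process the queue):
<prop key="hibernate.search.default.worker.backend">jms</prop>
<!-- hypothetical JNDI names; adjust to your application server -->
<prop key="hibernate.search.default.worker.jms.connection_factory">java:/ConnectionFactory</prop>
<prop key="hibernate.search.default.worker.jms.queue">queue/hibernatesearch</prop>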
Finally, please consider upgrading. These versions are extremely old.

SharedRDD code for Ignite works on a single-server setup but fails with an exception when an additional server is added

I have 2 server nodes running collocated with Spark workers. I am using a shared Ignite RDD to save my DataFrame. My code works fine when only one server node is started; if I start both server nodes, the code fails with:
Grid is in invalid state to perform this operation. It either not started yet or has already being or have stopped [gridName=null, state=STOPPING]
The DiscoverySpi is configured as below:
<property name="discoverySpi">
<bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
<property name="ipFinder">
<!--
Ignite provides several options for automatic discovery that can be used
instead of static IP based discovery. For information on all options refer
to our documentation: http://apacheignite.readme.io/docs/cluster-config
-->
<!-- Uncomment static IP finder to enable static-based discovery of initial nodes. -->
<bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">
<!--<bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.multicast.TcpDiscoveryMulticastIpFinder">-->
<property name="shared" value="true"/>
<property name="addresses">
<list>
<!-- In distributed environment, replace with actual host IP address. -->
<value>v-in-spark-01:47500..47509</value>
<value>v-in-spark-02:47500..47509</value>
</list>
</property>
</bean>
</property>
</bean>
</property>
I know this exception generally means the Ignite instance either was not started yet or was already stopped when an operation was attempted on it, but I don't think that is the case here: with a single server node it works fine, and I am not explicitly closing the Ignite instance in my program.
Also, in my code flow I perform operations in a transaction, which works. It goes like this:
create cache1: works fine
create cache2: works fine
put value in cache1: works fine
igniteRDD.saveValues on cache2: this step fails with the above-mentioned exception.
Use this link for the complete error trace.
The "Caused by" part is also pasted below:
Caused by: java.lang.IllegalStateException: Grid is in invalid state to perform this operation. It either not started yet or has already being or have stopped [gridName=null, state=STOPPING]
at org.apache.ignite.internal.GridKernalGatewayImpl.illegalState(GridKernalGatewayImpl.java:190)
at org.apache.ignite.internal.GridKernalGatewayImpl.readLock(GridKernalGatewayImpl.java:90)
at org.apache.ignite.internal.IgniteKernal.guard(IgniteKernal.java:3151)
at org.apache.ignite.internal.IgniteKernal.getOrCreateCache(IgniteKernal.java:2739)
at org.apache.ignite.spark.impl.IgniteAbstractRDD.ensureCache(IgniteAbstractRDD.scala:39)
at org.apache.ignite.spark.IgniteRDD$$anonfun$saveValues$1.apply(IgniteRDD.scala:164)
at org.apache.ignite.spark.IgniteRDD$$anonfun$saveValues$1.apply(IgniteRDD.scala:161)
at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1$$anonfun$apply$28.apply(RDD.scala:883)
at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1$$anonfun$apply$28.apply(RDD.scala:883)
at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1897)
at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1897)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:70)
at org.apache.spark.scheduler.Task.run(Task.scala:85)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:274)
... 3 more
It looks like the node embedded in the executor process is stopped for some reason while you are still trying to run the job. To my knowledge the only way for this to happen is to stop the executor process. Can this be the case? Is there anything in the log except the trace?

ActiveMQ connection in Fabric8 using Blueprint instead of DS

In Fabric8, the preferred way to obtain an ActiveMQ connection is via the mq-fabric profile, which provides an ActiveMQConnection object via Declarative Services. An example of this is given on GitHub, which works just fine.
However, I've yet to find a way for Declarative Services and Blueprint services to collaborate in Fabric8 (or any OSGi environment, really); thus, my OSGi application must use either DS or Blueprint. Mixing both doesn't seem to be an option.
If you want to use Blueprint (which I do), you must first create a broker through the web UI, then go back to the console, type cluster-list, find the port that Fabric8 assigned to the broker, and then configure a connection in Blueprint like so:
<bean id="activemqConnectionFactory" class="org.apache.activemq.ActiveMQConnectionFactory">
<property name="brokerURL" value="tcp://mydomain:33056" />
<property name="userName" value="admin" />
<property name="password" value="admin" />
</bean>
While this does work, it's not exactly deployment-friendly, as it involves a few manual steps that I'd like to avoid if possible. The main issue is that I don't know in advance what that port is going to be. I've combed through the config files and couldn't find it anywhere.
Is there a cleaner, more automated way to obtain an ActiveMQ connection in Fabric8 via blueprint, or must we use Declarative Services?
I stumbled across a solution to this issue in the fabric-camel-demo, which illustrates how to instantiate an ActiveMQConnectionFactory bean in Fabric8 via Blueprint.
<!-- use the fabric protocol in the brokerURL to connect to the ActiveMQ broker registered as default name -->
<!-- notice we could have used amq as the component name in Camel, and avoid any configuration at all,
as the amq component is provided out of the box when running in fabric -->
<bean id="jmsConnectionFactory" class="org.apache.activemq.ActiveMQConnectionFactory">
<property name="brokerURL" value="discovery:(fabric:default)"/>
<property name="userName" value="admin"/>
<property name="password" value="admin"/>
</bean>
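As a follow-up sketch (not taken from the demo itself; the component id and wiring are my assumptions), the factory can then be handed to Camel's ActiveMQ component in the same Blueprint file:
<!-- hypothetical wiring: expose the connection factory to Camel -->
<bean id="activemq" class="org.apache.activemq.camel.component.ActiveMQComponent">
<property name="connectionFactory" ref="jmsConnectionFactory"/>
</bean>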
Hope this helps!

Infinispan distributed cluster with shared index

Does anybody have a working example of how to configure a cluster of nodes to share an index using the Infinispan directory provider? All the documentation on Infinispan (the documentation is seriously lacking, btw) implies that it should be as easy as setting some properties, but no matter what I try I cannot get it to work. The nodes in the cluster find each other fine, and I can do get operations on one node and get objects that were put on another. But as soon as I do queries (use the index), it just starts to fail.
My infinispan config:
<global>
<transport clusterName="SomeCluster">
<properties>
<property name="configurationFile" value="jgroups-udp.xml" />
</properties>
</transport>
</global>
<namedCache name="access">
<clustering mode="distribution" />
<indexing enabled="true" indexLocalOnly="true">
<properties>
<property name="default.directory_provider" value="infinispan"/>
<property name="default.worker.backend" value="jgroups"/>
</properties>
</indexing>
</namedCache>
I have not found one example/tutorial which covers a distributed cache with a shared index, and I consider my google-fu to be great. I have asked on the Infinispan community forum but haven't gotten any replies there.
The errors I get are all related to the fact that only one node (the master node) should be able to write to the index, but the config above, which according to some of the Hibernate Search documentation should make one node the master, does nothing as far as I can see.
Edit: I'm using Infinispan 6.0.2.Final
Rather than the JGroups backend, I'd use the InfinispanIndexManager - this manager already provides its own backend.
<indexing enabled="true" indexLocalOnly="true">
<properties>
<property name="default.indexmanager" value="org.infinispan.query.indexmanager.InfinispanIndexManager" />
<property name="default.exclusive_index_use" value="false" />
<property name="default.metadata_cachename" value="lucene_metadata_repl" />
<property name="default.data_cachename" value="lucene_data_dist" />
<property name="default.locking_cachename" value="lucene_locking_repl" />
<property name="lucene_version" value="LUCENE_36" />
</properties>
</indexing>
Now, configure all the caches to be clustered (distributed or replicated). Without specifying the cache configuration this way, the three caches are created using the default cache configuration - which is by default non-clustered.
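As a minimal sketch of what that could look like in Infinispan 6.x XML (the cache names must match the *_cachename properties above; the replicated/distributed split mirrors the names and is my assumption, so tune it to your deployment):
<!-- hypothetical cache definitions for the index backing caches -->
<namedCache name="lucene_metadata_repl">
<clustering mode="replication"/>
</namedCache>
<namedCache name="lucene_data_dist">
<clustering mode="distribution"/>
</namedCache>
<namedCache name="lucene_locking_repl">
<clustering mode="replication"/>
</namedCache>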
I am not sure about the exclusive_index_use, though, maybe it's not necessary.
I agree that the Infinispan documentation could be much better; usually I have to fall back to investigating the source code. For examples of indexing configuration, you can look into the infinispan-query module, under src/test/resources.

Glassfish doc root URI patterns are not resolving

I have a Glassfish server running on Windows and a problem with alternatedocroot. When I use this
<property name="alternatedocroot_1" value="from=/images/9075.png dir=C:\member\"/>
and request server:8080/myapp/images/9075.png, the correct file is displayed. However, the directory is full of images, so I have tried all of these at different times:
<property name="alternatedocroot_1" value="from=/images/\*.png dir=C:\member\"/>
<property name="alternatedocroot_1" value="from=/images/*.png dir=C:\member\"/>
<property name="alternatedocroot_1" value="from=/images/\* dir=C:\member\"/>
<property name="alternatedocroot_1" value="from=/images/* dir=C:\member\"/>
but the same request (server:8080/myapp/images/9075.png) produces a 404 error. I am sure I am making a silly error, but I can't see it. I hope someone can help.
Did you try this?
<property name="alternatedocroot_1" value="from=/images/* dir=C:\\member\\"/>
This may have to do with Windows backslashes requiring escaping.
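Alternatively (an assumption on my part, but Java generally accepts forward slashes in Windows paths), you could sidestep the escaping question entirely:
<property name="alternatedocroot_1" value="from=/images/* dir=C:/member/"/>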