OpenIMS - Mobicents AS integration - restcomm

I am new to Mobicents AS and would like to integrate it with OpenIMSCore. Could anyone refer me to a link or guide me here?
Below are my configuration details, running in a VM:
10.0.0.9 hss.net1.test
10.0.0.10 pcscf.net1.test
10.0.0.11 icscf.net1.test
10.0.0.12 scscf.net1.test
I am able to make a voice/video call within OpenIMS.
Can anyone guide me here, please?
Regards,
-kranti

After authentication, the S-CSCF should send a SAR to the HSS. In the response, there should be a Cx-User-Data AVP which contains an XML document with the IMS data for the subscriber. As a starting point, you could provision a single InitialFilterCriteria with no TriggerPoints, and a default ApplicationServer which will handle all SIP methods.
Here's a simple working example from my test bed:
<?xml version="1.0" encoding="UTF-8"?>
<IMSSubscription>
<PrivateID>ronmcleod@provider1.test</PrivateID>
<ServiceProfile>
<PublicIdentity>
<BarringIndication>0</BarringIndication>
<Identity>sip:ronmcleod@provider1.test</Identity>
</PublicIdentity>
<InitialFilterCriteria>
<Priority>0</Priority>
<ApplicationServer>
<ServerName>sip:defaultapp@tas.core.ims1.test</ServerName>
<DefaultHandling>0</DefaultHandling>
</ApplicationServer>
</InitialFilterCriteria>
</ServiceProfile>
</IMSSubscription>

Related

Why flow element via attribute is not working in SUMO route file

I'm using the following flow element in my routes file:
<vType id="passenger" vClass="passenger" accel="2.6" decel="4.5" sigma="0.5" length="2.5" minGap="2.5" maxSpeed="12"/>
<flow id="flow0" type="passenger" from="center0down" via="bottom6down bottom6up" to="center0up" begin="0" period="3" number="30" />
But SUMO-GUI shows the following error:
Yet it is clearly stated here that the via attribute is defined for incomplete trips and flows. Any suggestion as to what I'm doing wrong?
The SUMO wiki always refers to the current state of development, whereas you are using SUMO 0.30.0. The functionality was implemented in September 2017, so it is available in 0.32.0.

Deltek Vision 7.6 - Column: does not exist when UpdateProject

I'm currently working on an integration with Deltek Vision 7.6. I'm using the SOAP API, which exposes all actions, and I'm currently creating and updating records.
The problem is that after adding a new field to the database table and in Deltek Vision, executing the same call returns an error like this:
<?xml version="1.0" encoding="UTF-8"?>
<DLTKVisionMessage>
<ReturnCode>ErrSave</ReturnCode>
<ReturnDesc>An unexpected error has occured while saving</ReturnDesc>
<ChangesNotProcessed>
<InsertErrors>
<Error rowNum="1">
<ErrorCode>InsertError</ErrorCode>
<Message>Column: does not exist.</Message>
<Table>Projects_MilestoneCompletionLog</Table>
<ROW new="1" mod="1" del="0">
<WBS1>100434</WBS1>
<WBS2>1014</WBS2>
<WBS3>SD</WBS3>
<Seq>a0D0m000000cf9NEAQ</Seq>
<CustMilestoneNumber>MS01</CustMilestoneNumber>
<CustMilestoneName>DM91 - Data Maintenance SAQ</CustMilestoneName>
<CustAmount>1150.0</CustAmount>
<CustSiteTrackerDate>2018-07-06T10:01:50</CustSiteTrackerDate>
</ROW>
</Error>
</InsertErrors>
</ChangesNotProcessed>
<Detail>Column: does not exist.</Detail>
<CallStack>UpdateProject.SendDataToDeltekVision</CallStack>
</DLTKVisionMessage>
The problematic field is CustSiteTrackerDate; if I remove it from Vision and the database, the update call happens correctly.
Does anyone know if, after creating a new custom field in Deltek, there is anything special we need to do to allow update calls through the API?
Thanks
I have been working with the Deltek SOAP API as well and found this in some of the documentation:
XML Schema for Vision Web Services/APIs: The data that you are adding or updating in the Vision database must be sent in XML format. The format of the XML data must comply with the schema. The order of the fields in your XML file must match the order of the fields that is defined by the schema. If your XML file does not match the required schema and the order of the fields, you will receive an error when you use web services to update the Vision database. Each applicable Info Center in Vision has an XML schema defined. Examples of the schema for each Info Center are included in schema files that are located on the Vision Web/app server in the \Vision\Web\Xsd directory (relative to the directory where Deltek Vision is installed). The names of the schema files start with the generic Info Center name followed by '_Schema.xsd.' For example, the name of the XML schema file used for the Employee Info Center would be 'Employee_Schema.Xsd.'
It may be that you need to add the new field to the Info Center XML schema: go to the server hosting your Vision Web/app, find the Info Center schema file that the new field should exist in, and make sure the field is there (and in the order the schema expects).
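If it helps, one quick way to check a payload against those schemas before sending it is plain JAXP validation. This is only a sketch, assuming the Projects schema follows the naming convention above (Projects_Schema.xsd) and sits in the \Vision\Web\Xsd directory; the paths are placeholders for your installation:
import java.io.File;
import javax.xml.XMLConstants;
import javax.xml.transform.stream.StreamSource;
import javax.xml.validation.Schema;
import javax.xml.validation.SchemaFactory;
import javax.xml.validation.Validator;

public class VisionPayloadCheck {
    public static void main(String[] args) throws Exception {
        // Hypothetical paths: adjust to your Vision install and payload location.
        File schemaFile = new File("C:/Deltek/Vision/Web/Xsd/Projects_Schema.xsd");
        File payload = new File("update-project-payload.xml");

        SchemaFactory factory = SchemaFactory.newInstance(XMLConstants.W3C_XML_SCHEMA_NS_URI);
        Schema schema = factory.newSchema(schemaFile);
        Validator validator = schema.newValidator();

        // Throws a SAXException describing the first offending element,
        // e.g. an unknown or out-of-order column such as CustSiteTrackerDate.
        validator.validate(new StreamSource(payload));
        System.out.println("Payload matches the schema.");
    }
}
If the new custom field is missing from the schema file, or your XML has it in a different position, the validator should point at the offending element.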

Configuring and Using Geode Regions and Locks for Atomic Data Structures

I am currently using Spring Boot Starter 1.4.2.RELEASE and Geode Core 1.0.0-incubating via Maven, against a local Docker configuration consisting of a Geode locator and 2 cache nodes.
I've consulted the documentation here:
http://geode.apache.org/docs/guide/developing/distributed_regions/locking_in_global_regions.html
I have configured a cache.xml file for use with my application like so:
<?xml version="1.0" encoding="UTF-8"?>
<client-cache
xmlns="http://geode.apache.org/schema/cache"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://geode.apache.org/schema/cache
http://geode.apache.org/schema/cache/cache-1.0.xsd"
version="1.0">
<pool name="serverPool">
<locator host="localhost" port="10334"/>
</pool>
<region name="testRegion" refid="CACHING_PROXY">
<region-attributes pool-name="serverPool"
scope="global"/>
</region>
</client-cache>
In my Application.java I have exposed the region as a bean via:
@SpringBootApplication
public class Application {
@Bean
ClientCache cache() {
return new ClientCacheFactory()
.create();
}
@Bean
Region<String, Integer> testRegion(final ClientCache cache) {
return cache.<String, Integer>getRegion("testRegion");
}
public static void main(String[] args) {
SpringApplication.run(Application.class, args);
}
}
And in my "service" DistributedCounter.java:
@Service
public class DistributedCounter {
@Autowired
private Region<String, Integer> testRegion;
/**
* Uses a fine-grained lock per key.
* @param counterKey {@link String} containing the key whose value should be incremented.
*/
public void incrementCounter(String counterKey) {
if(testRegion.getDistributedLock(counterKey).tryLock()) {
try {
Integer old = testRegion.get(counterKey);
if(old == null) {
old = 0;
}
testRegion.put(counterKey, old + 1);
} finally {
testRegion.getDistributedLock(counterKey).unlock();
}
}
}
}
I have used gfsh to configure a region named /testRegion; however, there is no option to indicate that its scope should be GLOBAL, only a variety of other types. Ideally this should be a persistent, replicated region, so I used the following command:
create region --name=/testRegion --type=REPLICATE_PERSISTENT
Using the how-to at: http://geode.apache.org/docs/guide/getting_started/15_minute_quickstart_gfsh.html it is easy to see the functionality of persistence and replication on my two node configuration.
However, the locking in DistributedCounter above does not cause any errors; it simply does not work when two processes attempt to acquire a lock on the same "key": the second process is not blocked from acquiring the lock. There is an earlier code sample from the GemFire forums which uses the DistributedLockService, which the current documentation warns against using for locking region entries.
Is fine-grained locking to support a "map" of atomically incremented counters a supported use case, and if so, how should it be configured?
The Region APIs for DistributedLock and RegionDistributedLock only support Regions with Global scope. These DistributedLocks have locking scope within the name of the DistributedLockService (which is the full path name of the Region) only within the cluster. For example, if the Global Region exists on a Server, then the DistributedLocks for that Region can only be used on that Server or on other Servers within that cluster.
Cache Clients were originally a form of hierarchical caching, which means that one cluster could connect to another cluster as a Client. If a Client created an actual Global region, then the DistributedLock within the Client would only have a scope within that Client and the cluster that it belongs to. DistributedLocks do not propagate in any way to the Servers that such a Client is connected to.
The correct approach would be to write Function(s) that utilize the DistributedLock APIs on Global regions that exist on the Server(s). You would deploy those Functions to the Server and then invoke them on the Server(s) from the Client.
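A minimal sketch of such a server-side Function, assuming a Region named "testRegion" with scope GLOBAL deployed on the servers (the class name, key handling, and result value are illustrative, not an official example):
import java.util.concurrent.locks.Lock;
import org.apache.geode.cache.Region;
import org.apache.geode.cache.execute.Function;
import org.apache.geode.cache.execute.FunctionContext;
import org.apache.geode.cache.execute.RegionFunctionContext;

public class IncrementCounterFunction implements Function {

    @Override
    public void execute(FunctionContext context) {
        RegionFunctionContext rfc = (RegionFunctionContext) context;
        Region<String, Integer> region = rfc.getDataSet();   // the GLOBAL-scope region on this server
        String counterKey = (String) rfc.getArguments();

        Lock lock = region.getDistributedLock(counterKey);   // only available for GLOBAL regions
        lock.lock();
        try {
            Integer old = region.get(counterKey);
            region.put(counterKey, old == null ? 1 : old + 1);
        } finally {
            lock.unlock();
        }
        context.getResultSender().lastResult(true);
    }

    @Override
    public String getId() { return "IncrementCounterFunction"; }

    @Override
    public boolean hasResult() { return true; }

    @Override
    public boolean optimizeForWrite() { return true; }

    @Override
    public boolean isHA() { return false; }
}
You would deploy this to the servers (for example with gfsh deploy) and invoke it from the client with something like FunctionService.onRegion(testRegion).withArgs("counter-1").execute("IncrementCounterFunction").getResult() (withArgs was later renamed setArguments).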
In general, use of Global regions is avoided because every individual put acquires a DistributedLock within the Server's cluster, and this is a very expensive operation.
You could do something similar with a non-Global region by creating a custom DistributedLockService on the Servers and then use Functions to lock/unlock around code that you need to be globally synchronized within that cluster. In this case, the DistributedLock and RegionDistributedLock APIs on Region (for the non-Global region) would be unavailable and all locking would have to be done within a Function on the Server using the DistributedLockService API.
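A rough sketch of that alternative, again as server-side code and with a made-up service name:
import org.apache.geode.cache.CacheFactory;
import org.apache.geode.cache.Region;
import org.apache.geode.distributed.DistributedLockService;

public class ServerSideLocking {

    // Server-side only (e.g. called from a Function): increments counterKey in a
    // non-Global region while holding a lock from a custom DistributedLockService.
    public static void incrementWithDls(Region<String, Integer> region, String counterKey) {
        DistributedLockService dls = DistributedLockService.getServiceNamed("counterLocks");
        if (dls == null) {
            // Created once per member; real code should guard against concurrent creation.
            dls = DistributedLockService.create("counterLocks",
                    CacheFactory.getAnyInstance().getDistributedSystem());
        }
        if (dls.lock(counterKey, 5000L, -1L)) {   // wait up to 5s, hold until unlock
            try {
                Integer old = region.get(counterKey);
                region.put(counterKey, old == null ? 1 : old + 1);
            } finally {
                dls.unlock(counterKey);
            }
        }
    }
}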
This only works for server side code (in Functions for example).
From client code you can implement locking semantics using "region.putIfAbsent".
If 2 (or more) clients call this API on the same region and key, only one will successfully put, which is indicated by a return value of null. This client is considered to hold the lock. The other clients will get the object that was put by the winner. This is handy because, if the value you "put" contains a unique identifier of the client, then the losers even know who is holding the lock.
Having a region entry represent a lock has other nice benefits. The lock survives across failures. You can use region expiration to set the maximum lease time for a lock, and, as mentioned previously, it's easy to tell who is holding the lock.
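A minimal client-side sketch of this pattern, with illustrative region and key names (no lease handling here, so you would add expiration on the lock region to avoid orphaned locks):
import java.util.UUID;
import org.apache.geode.cache.Region;

public class EntryLockExample {

    // Tries to increment counterKey while holding a "lock" entry in lockRegion.
    public static boolean incrementWithEntryLock(Region<String, String> lockRegion,
                                                 Region<String, Integer> counterRegion,
                                                 String counterKey) {
        String lockKey = "lock." + counterKey;
        String myId = UUID.randomUUID().toString();   // identifies this client as the holder

        // putIfAbsent returns null only for the winner; losers get the holder's id back.
        String currentHolder = lockRegion.putIfAbsent(lockKey, myId);
        if (currentHolder != null) {
            return false;                             // someone else holds the lock
        }
        try {
            Integer old = counterRegion.get(counterKey);
            counterRegion.put(counterKey, old == null ? 1 : old + 1);
            return true;
        } finally {
            lockRegion.remove(lockKey, myId);         // release only if we still own it
        }
    }
}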
Hope this helps.
It seems that gfsh does not provide an option to set the correct scope=GLOBAL.
Maybe you could start a server with the --cache-xml-file option, which would point to a cache.xml file.
The cache.xml file should look like this:
<?xml version="1.0" encoding="UTF-8"?>
<cache xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://schema.pivotal.io/gemfire/cache" xsi:schemaLocation="http://schema.pivotal.io/gemfire/cache http://schema.pivotal.io/gemfire/cache/cache-8.1.xsd" version="8.1" lock-lease="120" lock-timeout="60" search-timeout="300" is-server="true" copy-on-read="false">
<cache-server port="0"/>
<region name="testRegion">
<region-attributes data-policy="persistent-replicate" scope="global"/>
</region>
</cache>
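For example, the server could be started from gfsh with something like this (the name and path are placeholders):
start server --name=server1 --cache-xml-file=/path/to/cache.xml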
Also, the client configuration does not need to define the scope in its region-attributes.

How to build Query Parameter using Spring Traverson

I have a Spring Data REST web service with Querydsl web support enabled, so I can query any of the fields directly like below:
http://localhost:9000/api/prod1007?cinfo1=0126486035
I was using Traverson to access this service, but Traverson is not generating the query parameter as above. Below is my code (I have tried both withTemplateParameters() and withParameters() at the Hop level).
Code:
Map<String,Object> parameters = new HashMap<String,Object>();
parameters.put("cinfo1", "0127498374");
PagedResources<Tbpinstance> items = traverson
.follow(Hop.rel("prod1007"))
.withTemplateParameters(parameters)
.toObject(resourceParameterizedTypeReference);
Any Help is much appreciated. Thanks!
Traverson needs to know where to put those parameters. They could be path parameters, or they could be query parameters. Furthermore, Traverson navigates the service from the root, so the parameters might need to be inserted somewhere in the middle, and not in the final step only.
For these reasons the server needs to clearly tell how to use the parameters. Traverson needs a HATEOAS-"directory" for the service. When Traverson HTTP GETs the http://localhost:9000/api document, it needs to contain a link similar to this:
"_links" : {
"product" : {
"href" : "http://localhost:9000/api/prod1007{?cinfo1}",
"templated" : true
},
}
Now it knows that the cinfo1 parameter is a query parameter and will be able to put it into its place.
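Once the link is templated like that, the parameters can also be bound to the specific hop. A sketch reusing the traverson and resourceParameterizedTypeReference from your code (Hop.withParameters is the map-based variant of Hop.withParameter):
Map<String, Object> parameters = new HashMap<String, Object>();
parameters.put("cinfo1", "0127498374");

PagedResources<Tbpinstance> items = traverson
        .follow(Hop.rel("prod1007").withParameters(parameters)) // parameters expand this hop's template
        .toObject(resourceParameterizedTypeReference);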
@ZeroOne, you are entirely correct: that is what the response from the server should look like. Currently spring-hateoas does not support responses that look like that (I expect it will in the future, as I have seen comments by Oliver Gierke indicating that spring-hateoas is going through a major upgrade).
As of the time of writing, to generate responses from the server as you describe, we have used the spring-hateoas-ext project mentioned in https://github.com/spring-projects/spring-hateoas/issues/169. You can find the code at https://github.com/dschulten/hydra-java#affordancebuilder-for-rich-hyperlinks-from-v-0-2-0.
This is a drop-in replacement for spring-hateoas' ControllerLinkBuilder.
Here is the maven dependency we use (but check for the latest version).
<!-- Drop in replacement from spring-hateoas ControllerLinkBuilder -->
<dependency>
<groupId>de.escalon.hypermedia</groupId>
<artifactId>spring-hateoas-ext</artifactId>
<version>0.3.0-beta6</version>
</dependency>
Here's the import we use in our ResourceAssemblers.
import static de.escalon.hypermedia.spring.AffordanceBuilder.*;
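As a hedged illustration only (the controller and method names are invented), link creation with that import looks just like ControllerLinkBuilder usage, and it is the controller method's request parameters that should end up as the {?cinfo1} template variable:
// Inside a ResourceAssembler, using AffordanceBuilder's linkTo/methodOn
// the same way you would use ControllerLinkBuilder's:
resource.add(linkTo(methodOn(ProductController.class).getProduct(null)) // hypothetical controller method with @RequestParam cinfo1
        .withRel("prod1007"));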

Birt data source parameters from a property file

I have multiple BIRT reports that obtain their data from the same JDBC data source.
Is it possible to obtain the connection parameters (driver URL, user name, and password) from an external property file or similar?
Once you create a functional data source, you can add that data source to a report library that can be imported and used by all BIRT reports in your system. The source inside the library can have static connection attributes, or you can abstract them using externalized properties.
If you want to externalize the connection info, you will need to tweak the Data source itself. Inside the Data Source Editor, there is a "Property Binding" section that allows you to abstract all the values governing the data connection. From there you can bind the values (using the expression editor) to either report parameters or a properties file.
To bind to a report parameter, use this syntax as the expression: params["parameterName"].value
To bind to a properties file, set the Resource file in the Report's top-level properties. From there you can just use the property key value to bind the entry to the Data Source.
Good Luck!
An alternative to @Mystik's good "Property Binding" solution is externalizing to a connection profile.
Create a data source (say "DS"), setting up a correct configuration of the parameters to connect to a DB.
Right click on "DS" > Externalize to Connection Profile... > check both options, set a name for the Connection Profile, Ok > set the path and filename where to save the Connection Profile Store (say "reportName.cps"), uncheck Encrypt... (this way we can modify the information in the XML file by hand).
Now we have "reportName.cps", an XML file that we can modify according to the environment where we place our report (development, production, ...). The problem is that "DS" has loaded that information statically from "reportName.cps". It loads it dynamically only if it can find "reportName.cps" at the absolute path we specified, so when the environment changes, the file path will be different and the report won't find our file. To tell the report the correct location of the file and load it dynamically, let's write a script:
Set up a beforeOpen script to use the connection profile that is deployed in the resource folder, which can be different for every environment:
var myresourcefolder = reportContext.getDesignHandle().getResourceFolder();
this.setExtensionProperty("OdaConnProfileStorePath", myresourcefolder + "/reportName.cps");
For those struggling to configure a connection profile, the files must look as follows (using PostgreSQL as an example):
db-config-birt.xml (or whatever name)
<?xml version="1.0"?>
<DataTools.ServerProfiles version="1.0">
<profile autoconnect="No" desc="" id="uuid" name="MyPostgreSQL"
providerID="org.eclipse.birt.report.data.oda.jdbc">
<baseproperties>
<property name="odaDriverClass" value="org.postgresql.Driver"/>
<property name="odaURL" value="jdbc:postgresql://XX:5432/YY"/>
<property name="odaPassword" value="zzz"/>
<property name="odaUser" value="abc"/>
</baseproperties>
</profile>
</DataTools.ServerProfiles>
The key points here are:
The XML MUST start with <?xml version="1.0"?> (or <?xml version="1.0" encoding="UTF-8" standalone="no"?>, but when I was using that I got a parsing exception while deploying on Tomcat)
The property keys MUST be odaDriverClass, odaURL, odaPassword, odaUser (order doesn't matter)
This file must be readable by the server, e.g. chmod 664 this file
If any of the conditions above aren't met, BIRT will throw a laconic error:
org.eclipse.birt.report.engine.api.EngineException: An exception occurred during processing. Please see the following message for details:
Cannot open the connection for the driver: org.eclipse.birt.report.data.oda.jdbc.
org.eclipse.birt.report.data.oda.jdbc.JDBCException: Missing properties in Connection.open(Properties). ;
org.eclipse.datatools.connectivity.oda.OdaException: Unable to find or access the named profile (MyPostgreSQL) in profile store path (/opt/tomcat/mytomcat/conf/db-config-birt.xml). ;
org.eclipse.datatools.connectivity.oda.OdaException ;
Then in the report (myreport.rptdesign), the data source in its XML must look like this:
myreport.rptdesign (or whatever name)
<data-sources>
<oda-data-source extensionID="org.eclipse.birt.report.data.oda.jdbc" name="MyPostgreSQL" id="320">
<property name="OdaConnProfileName">MyPostgreSQL</property>
<property name="OdaConnProfileStorePath">/opt/tomcat/mytomcat/conf/db-config-birt.xml</property>
</oda-data-source>
</data-sources>
Obviously, you will adapt the OdaConnProfileStorePath to suit your needs.