Custom Object / Instance name is not reflected on the server for LwM2M custom Object - lwm2m

I have created a few custom objects in the range 26241-32768. As per the OMA registry spec, custom objects in that range do not have to be registered. I am using the Leshan 1.0 LwM2M server and the IOWA 1.0 LwM2M client. When these custom objects are displayed in the Leshan server UI, the object and instance names are not shown properly, as the attached image shows. Is it possible to display the names of the Object and Object Instance in the server UI without registering the custom objects with OMA? Is there any other possibility for displaying the names in the server?

Since servers have no way to know the representation of an unregistered object, they have to be provisioned with its XML definition so they can parse its contents and, in turn, display the information properly in their web interface.
For Leshan Server:
• You need to run your own instance of the server. When you launch it, you can provide a folder where your XML definitions are located, such as:
o java -jar ./leshan-server-demo.jar --modelsfolder XML_PATH
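If you embed Leshan in your own application instead of running the demo JAR, the same provisioning can be done programmatically. A minimal sketch, assuming the Leshan 1.x ObjectLoader/StaticModelProvider APIs, with XML_PATH as a placeholder for your models folder:

    import java.io.File;
    import java.util.List;

    import org.eclipse.leshan.core.model.ObjectLoader;
    import org.eclipse.leshan.core.model.ObjectModel;
    import org.eclipse.leshan.server.californium.LeshanServerBuilder;
    import org.eclipse.leshan.server.model.StaticModelProvider;

    public class CustomModelServer {
        public static void main(String[] args) throws Exception {
            // Load the standard OMA models, then add the custom ones from a folder.
            List<ObjectModel> models = ObjectLoader.loadDefault();
            models.addAll(ObjectLoader.loadObjectsFromDir(new File("XML_PATH")));

            LeshanServerBuilder builder = new LeshanServerBuilder();
            // The model provider is what lets the server resolve object and
            // resource names for unregistered custom objects.
            builder.setObjectModelProvider(new StaticModelProvider(models));
            builder.build().start();
        }
    }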
For Connecticut:
• You need to send your XML files to IoTerop, and they will provision their server with them. Today, there is no public API for this.

Related

How to keep a properties file outside the Mule code in MuleSoft

I have defined a dev.properties file for the Mule flow, where I pass the username and password required to run the flow. This password gets updated every month, so every month I have to redeploy the code to the server after changing the password. Is there a way to keep the properties file outside the code, on the Mule server path, and change it when required in order to avoid redeployment?
One more idea is to completely discard any usage of a file to pick up the username and password.
Instead, try using a credentials-providing service, such as an HTTP requester that collects the username and password from an independent API (a child API / providing service).
Store them in a cached object store of your parent API (the calling API). Keep using those values unless the flow using them fails, or expire them after a month if the client requires it; then simply refresh them.
You can trigger the credentials-providing service using a scheduler with a cron expression that has monthly triggers.
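Outside Mule, the shape of that pattern is easy to see in plain Java; a sketch, assuming a hypothetical credentials endpoint, with a fixed 30-day delay standing in for a monthly cron trigger:

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;
    import java.util.concurrent.atomic.AtomicReference;

    public class CredentialsCache {
        // Hypothetical child API that serves the current credentials.
        private static final URI CREDS_API = URI.create("https://example.internal/credentials");

        private final HttpClient http = HttpClient.newHttpClient();
        private final AtomicReference<String> cached = new AtomicReference<>();

        // Fetch fresh credentials from the providing service and cache them.
        void refresh() throws Exception {
            HttpRequest request = HttpRequest.newBuilder(CREDS_API).GET().build();
            cached.set(http.send(request, HttpResponse.BodyHandlers.ofString()).body());
        }

        String credentials() {
            return cached.get();
        }

        public static void main(String[] args) throws Exception {
            CredentialsCache cache = new CredentialsCache();
            cache.refresh(); // initial load at startup

            ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
            scheduler.scheduleAtFixedRate(() -> {
                try {
                    cache.refresh(); // periodic refresh
                } catch (Exception e) {
                    // keep the previous value; retry on the next tick
                }
            }, 30, 30, TimeUnit.DAYS);
        }
    }

In Mule itself the equivalent pieces are a Scheduler source with a cron scheduling strategy, an HTTP Request operation, and the Object Store connector.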
No, because even if the properties file is outside the application, properties are loaded at application deployment. So you would need to restart the application anyway to pick up the new values.
Instead you can create a custom module that reads the properties from somewhere (a file, some service, etc.), assigns the values to a variable, and then use the variable at execution time. Note that some configurations can only be set at deployment time, so variables will not be evaluated for them.
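A minimal sketch of such a reader in plain Java (the Mule SDK wrapping is omitted, and the file path is hypothetical); because the file is re-read at execution time, edited values are picked up without a redeploy:

    import java.io.FileInputStream;
    import java.io.IOException;
    import java.io.InputStream;
    import java.util.Properties;

    public final class RuntimeProperties {
        // Re-read the file on every call so edits take effect immediately.
        public static Properties load(String path) throws IOException {
            Properties props = new Properties();
            try (InputStream in = new FileInputStream(path)) {
                props.load(in);
            }
            return props;
        }

        public static void main(String[] args) throws IOException {
            // Hypothetical location outside the application archive.
            Properties props = load("/opt/mule/conf/dev.properties");
            System.out.println("username=" + props.getProperty("username"));
        }
    }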
If the credentials do not expose your application's security or data, you can move them to another config file (placed outside the Mule app path). Generate a RAML file which will read and reload the credentials after application deploy/start-up, and store them in a cache with a timeToLive of around 12 hours.
The next time you have to change the username/password, change the file directly and the cache will refresh the values automatically after the expiry time.
Actually no, because all the secure properties need to be there at runtime, and if they are not, your application will fail.
There is one way, though it's not the best one: instead of editing code, you can edit the secure properties (i.e. the username and password in your case) directly in the CloudHub Runtime Manager properties tab.
After editing, just apply the changes; the API will then restart automatically and deploy successfully.

How to switch between different properties files based on request at runtime?

Currently I read a properties file by defining a global element like:
> <configuration-properties doc:name="Local Configuration Properties"
> doc:id="899a4f41-f036-4262-8cf2-3b0062dbd740"
> file="config\local_app.properties" />
But this is not enough for me when trying to deal with different clients dynamically.
Use case
I need to pick the right configuration file when a request comes in. That is, I have a different properties file for each client (their credentials and everything else differ). When a request is received by the listener, I check the clientid header, and based on that value I pick the right configuration file. My properties files are kept in a different location (deployment is done through OpenShift), not within the Mule app, so that we don't need to redeploy the application each time the application supports a new client.
So, in this case, how do I define this, and how do I pick the right properties file?
e.g.:
clientid=google: I have a properties file defined, google-app.properties.
clientid=yahoo: I have a properties file defined, yahoo-app.properties.
clientid=?: I'll add a properties file ?-app.properties later.
Properties files are read at deployment time. That means that if you change the values, you have to redeploy the application to read the new ones. System properties need a restart of the Mule runtime instance to be set, and Runtime Manager properties need a restart of the application. In any case the application will restart. Properties cannot be used the way you want.
There is no way to use configuration properties dynamically like that. What you could do is create a module using the Mule SDK that reads properties files and returns the resulting set of properties, so that you can assign the result to a variable and use the values through it. You will need to find a way to update the values; maybe set up a flow with a scheduler that re-reads them at a fixed frequency.
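The per-client selection described in the question would then be a thin layer on top of such a module; a sketch, assuming a hypothetical config directory mounted outside the app (e.g. an OpenShift volume):

    import java.io.FileInputStream;
    import java.io.IOException;
    import java.io.InputStream;
    import java.nio.file.Path;
    import java.nio.file.Paths;
    import java.util.Properties;

    public final class ClientProperties {
        // Hypothetical mount point for the per-client files.
        private static final String CONFIG_DIR = "/etc/mule/config";

        // Map the clientid header to its file, e.g. "google" -> google-app.properties.
        public static Properties forClient(String clientId) throws IOException {
            Path file = Paths.get(CONFIG_DIR, clientId + "-app.properties");
            Properties props = new Properties();
            try (InputStream in = new FileInputStream(file.toFile())) {
                props.load(in);
            }
            return props;
        }
    }

A new client then only needs its ?-app.properties file dropped into the folder; no redeployment is required.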

Is it possible to get a list of available instance types in a specific availability zone for AWS RDS?

I'm trying to modify an RDS DB instance launched in a VPC through the AWS API, using the ModifyDBInstance action. I'm not changing the instance type (the instance was launched with the db.m1.small type and that is unchanged), but I'm receiving the message:
AWS Error. Request ModifyDBInstance failed. Cannot modify the instance class because there are no instances of the requested class available in the current instance's availability zone. Please try your request again at a later time. (RequestID: xxx).
According to the AWS docs:
To determine the instance classes that are available for a particular DB engine, use the DescribeOrderableDBInstanceOptions action. Note that not all instance classes are available in all regions for all DB engines.
So I have two questions:
Is it possible to get, via the API, only the instance types available in a specific AZ? The DescribeOrderableDBInstanceOptions response contains many instance types that are not actually available. I also checked the response of the DescribeReservedDBInstancesOfferings action, and it doesn't fit either.
Why is it possible to launch a DB instance with some instance type, but then run into trouble modifying that DB instance without even changing the instance type?
Any ideas?
It looks like one of the return values listed in this AWS RDS CLI call is AvailabilityZones:
AvailabilityZones -> (list)
A list of Availability Zones for the orderable DB instance.
(structure)
Contains Availability Zone information.
This data type is used as an element in the following data type:
OrderableDBInstanceOption
Name -> (string)
The name of the availability zone.
Generally the CLI allows you to filter, but for some reason or another this is not supported for RDS:
--filters (list)
This parameter is not currently supported.
The API returns the OrderableDBInstanceOption object, which also has the AZs listed, so you can filter on them client-side.
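For example, with the AWS SDK for Java (v1) you can page through DescribeOrderableDBInstanceOptions and keep only the classes whose AvailabilityZones contain your target AZ; the engine and AZ below are example values:

    import com.amazonaws.services.rds.AmazonRDS;
    import com.amazonaws.services.rds.AmazonRDSClientBuilder;
    import com.amazonaws.services.rds.model.AvailabilityZone;
    import com.amazonaws.services.rds.model.DescribeOrderableDBInstanceOptionsRequest;
    import com.amazonaws.services.rds.model.DescribeOrderableDBInstanceOptionsResult;
    import com.amazonaws.services.rds.model.OrderableDBInstanceOption;

    public class OrderableInZone {
        public static void main(String[] args) {
            AmazonRDS rds = AmazonRDSClientBuilder.defaultClient();
            String targetAz = "us-east-1a"; // example AZ

            DescribeOrderableDBInstanceOptionsRequest request =
                    new DescribeOrderableDBInstanceOptionsRequest().withEngine("mysql");
            String marker = null;
            do {
                DescribeOrderableDBInstanceOptionsResult result =
                        rds.describeOrderableDBInstanceOptions(request.withMarker(marker));
                for (OrderableDBInstanceOption option : result.getOrderableDBInstanceOptions()) {
                    // Keep only instance classes orderable in the target AZ.
                    boolean inZone = option.getAvailabilityZones().stream()
                            .map(AvailabilityZone::getName)
                            .anyMatch(targetAz::equals);
                    if (inZone) {
                        System.out.println(option.getDBInstanceClass());
                    }
                }
                marker = result.getMarker();
            } while (marker != null);
        }
    }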
To answer #2: AWS does have capacity issues from time to time, like any other cloud or service provider, though they are generally better at handling them than others. Which AZ are you trying to use, and what instance size? If you continue to have issues, I would open a support ticket with AWS.
The easiest way is to select any one of the RDS instances in your infrastructure and click Modify; there will be an option like DB instance class, a drop-down where you can find the instance types available in that particular region.

WebSphere 8.5 Shared Java Custom Properties

I have a clustered environment that has two WebSphere Application Servers.
Inside the Process definition > Java Virtual Machine > Custom properties section for my servers I store several properties.
Is there any way to share values in this section between two app servers?
I don't think you can share JVM custom properties among multiple servers. However, you can create WebSphere variables (Environment > WebSphere Variables). When you create a variable there, you can choose a scope that will allow the variable to apply to multiple servers. That variable won't work the same as a JVM custom property, so what happens next depends on how the variable is used. If you need to access the variable inside the application, see this link:
http://www.slightlytallerthanaverageman.com/2007/04/02/access-websphere-variables-in-j2ee-applications/
If you need it to act like a JVM custom property, WAS might do variable expansion on JVM custom properties. Say you defined a WebSphere variable named "WAS_VAR_X" and needed it to be set as a JVM property named "jvmPropertyX". You might be able to define the JVM custom property with:
Name: jvmPropertyX
Value: ${WAS_VAR_X}
I haven't tried this myself, so if you try it and it doesn't work, reply so I can edit the answer.
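If the expansion does work, the application side needs nothing special, since WAS passes JVM custom properties to the server JVM as ordinary Java system properties:

    public class JvmPropertyExample {
        public static void main(String[] args) {
            // JVM custom properties defined in the admin console surface
            // as plain system properties inside the server JVM.
            String value = System.getProperty("jvmPropertyX", "default-value");
            System.out.println("jvmPropertyX=" + value);
        }
    }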
Maybe you can use a database or cache (Redis, etc.) to store the shared values.
When the app starts up, it loads the properties from the database/cache.
You can also change the properties, and the other server can then load the new shared values.
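For example, with the Jedis Redis client (the host and key names are hypothetical):

    import redis.clients.jedis.Jedis;

    public class SharedProperties {
        public static void main(String[] args) {
            // Both app servers point at the same Redis instance.
            try (Jedis jedis = new Jedis("redis.example.internal", 6379)) {
                // One server (or an admin script) writes the shared value...
                jedis.set("app.jvmPropertyX", "some-shared-value");
                // ...and every server reads it at startup or on demand.
                System.out.println(jedis.get("app.jvmPropertyX"));
            }
        }
    }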

Resources accessible via the DataFactory Interface

Looking into org.pentaho.reporting.engine.classic.core.DataFactory, and more specifically into the initialize method (which was formerly part of the ContextAwareDataFactory), I was wondering what resources, or what part of the context, is accessible via the interface, e.g. via the ResourceManager.
For instance, is it possible to get access to "resources" defined in a report, e.g. data sources or formulas (aside from the report parameters, which are accessible via the query method)? Thanks in advance!
The resource-manager allows you to access raw data stored in the zip/prpt file, but we do not allow you to access the parsed report or any of its (parsed) components.
With the resource-manager you can, for instance, load embedded XML or other files and parse them as part of the query process.
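A rough sketch of that, using the LibLoader ResourceManager and context key that are handed to the DataFactory's initialize method; the embedded file name "lookup.xml" is hypothetical:

    import javax.xml.parsers.DocumentBuilderFactory;

    import org.pentaho.reporting.libraries.resourceloader.ResourceData;
    import org.pentaho.reporting.libraries.resourceloader.ResourceKey;
    import org.pentaho.reporting.libraries.resourceloader.ResourceManager;
    import org.w3c.dom.Document;

    public class EmbeddedXmlLoader {
        // resourceManager and contextKey are the ones passed to
        // DataFactory.initialize(...); the key is resolved relative to the
        // report bundle, so "lookup.xml" means a file embedded in the .prpt zip.
        public Document loadEmbeddedXml(ResourceManager resourceManager, ResourceKey contextKey)
                throws Exception {
            ResourceKey key = resourceManager.deriveKey(contextKey, "lookup.xml");
            ResourceData data = resourceManager.load(key);
            return DocumentBuilderFactory.newInstance().newDocumentBuilder()
                    .parse(data.getResourceAsStream(resourceManager));
        }
    }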
If you were to do something extra nasty that requires access to the report definition and its content, then you could gain access via a wild hack using subreports:
• Create a new report function (via code). In that function, override the "reportInitialized" method to get the report instance ("event.getState().getReportDefinition()"). Store that object in the function and return it via the "getValue()" method of your function.
• Pass that function's result as a parameter to a subreport.
• The subreport's data factories can now access the parameter, which is the report object returned by the master report's function.
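A sketch of the function from the first step (the class name is hypothetical; the calls are the ones quoted above):

    import org.pentaho.reporting.engine.classic.core.ReportDefinition;
    import org.pentaho.reporting.engine.classic.core.event.ReportEvent;
    import org.pentaho.reporting.engine.classic.core.function.AbstractFunction;

    // Captures the report definition when the report is initialized, so that
    // it can be passed to a subreport as a parameter value.
    public class ReportGrabberFunction extends AbstractFunction {
        private ReportDefinition report;

        public void reportInitialized(final ReportEvent event) {
            report = event.getState().getReportDefinition();
        }

        public Object getValue() {
            return report;
        }
    }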
This process is intentionally complex and not fun. We strongly advise against using the report itself in the process of querying data.
P.S.: If you intend to access a SQL/MQL/MDX datasource from a scriptable datasource, then simply use the script extensions that have been built into these datasources since PRD-3.9:
http://www.sherito.org/2011/11/pentaho-reportings-metadata-datasources.html