I am trying to write to a GemFire server, into the region 'trade'.
My class looks like this:
public class TradeDetails {
    String exchange;
    String Product;
    String Account;
    String Quantity;
    // getters and setters
}
I have deployed the JAR via the gfsh console.
The command I am running in the gfsh console is:
put --key=1 --value=('exchange':'xyz','Product':'abc','Account':'xyz','Quantity':'123L') --region=/trade --value-class=model.TradeDetails
But I am getting an error:
Couldn't convert JSON to Object of type class model.TradeDetails.
What could be the cause?
Well, according to the GemFire documentation, your gfsh put command appears to be correct...
put --key=1 --value=('exchange':'xyz','Product':'abc','Account':'xyz','Quantity':'123L') --region=/trade --value-class=model.TradeDetails
However, your key value 1 is a bit suspect. If you used a key constraint of java.lang.Long on your "/trade" Region, then you also need to specify the --key-class option on the put.
I was able to successfully perform the following...
$ gfsh
_________________________ __
/ _____/ ______/ ______/ /____/ /
/ / __/ /___ /_____ / _____ /
/ /__/ / ____/ _____/ / / / /
/______/_/ /______/_/ /_/ v8.2.0
Monitor and Manage GemFire
gfsh>connect
Connecting to Locator at [host=localhost, port=10334] ..
Connecting to Manager at [host=10.99.199.3, port=1099] ..
Successfully connected to: [host=10.99.199.3, port=1099]
gfsh>list members
Member Count : 1
Coordinator : SpringGemFireDataServer (10.99.199.3(SpringGemFireDataServer:77179)<v0>:47312)
Name | Id
----------------------- | ----------------------------------------------------
SpringGemFireDataServer | 10.99.199.3(SpringGemFireDataServer:77179)<v0>:47312
gfsh>describe member --name=SpringGemFireDataServer
Name : SpringGemFireDataServer
Id : 10.99.199.3(SpringGemFireDataServer:77179)<v0>:47312
Host : 10.99.199.3
Regions : People
PID : 77179
Groups :
Used Heap : 229M
Max Heap : 3641M
Working Dir : /Users/jblum/pivdev/spring-data-gemfire-tests-workspace/spring-data-gemfire-tests/target
Log file : /Users/jblum/pivdev/spring-data-gemfire-tests-workspace/spring-data-gemfire-tests/target
Locators : localhost[10334]
Cache Server Information
Server Bind : localhost
Server Port : 40404
Running : true
Client Connections : 0
gfsh>list regions
List of regions
---------------
People
gfsh>describe region --name=/People
..........................................................
Name : People
Data Policy : partition
Hosting Members : SpringGemFireDataServer
Non-Default Attributes Shared By Hosting Members
Type | Name | Value
------ | ---- | -----
Region | size | 0
gfsh>
gfsh>put --region=/People --key=1 --key-class=java.lang.Long --value=('firstName':'Jon','lastName':'Doe') --value-class=org.spring.data.gemfire.app.beans.Person
Result : true
Key Class : java.lang.Long
Key : 1
Value Class : org.spring.data.gemfire.app.beans.Person
Value
------
<NULL>
gfsh>
gfsh>describe region --name=/People
..........................................................
Name : People
Data Policy : partition
Hosting Members : SpringGemFireDataServer
Non-Default Attributes Shared By Hosting Members
Type | Name | Value
------ | ---- | -----
Region | size | 1
Note, my "/People" Region has a key type of java.lang.Long and value type of org.spring.data.gemfire.app.beans.Person.
However, when I attempted to read "Jon Doe" back out, Gfsh puked...
gfsh>get --region=/People --key=1 --key-class=java.lang.Long --value-class=org.spring.data.gemfire.app.beans.Person
Exception occurred. null
However, I did go on to create a (Spring Boot-based) GemFire client cache application (with a base configuration of SpringGemFireClient) that pulled the Person back out successfully.
Person is [Jon Doe]
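For reference, a plain-Java equivalent of that client-side read might look like the following minimal sketch (it assumes the Person class is on the classpath and the GemFire 8.x client API; the Spring Boot client above does the same thing via Spring configuration):

import com.gemstone.gemfire.cache.Region;
import com.gemstone.gemfire.cache.client.ClientCache;
import com.gemstone.gemfire.cache.client.ClientCacheFactory;
import com.gemstone.gemfire.cache.client.ClientRegionShortcut;

public class PersonReader {

    public static void main(String[] args) {
        // Connect through the Locator the server registered with (host/port assumed)
        ClientCache cache = new ClientCacheFactory()
            .addPoolLocator("localhost", 10334)
            .create();

        // PROXY region: every operation is forwarded to the server-side "/People" Region
        Region<Long, Person> people = cache
            .<Long, Person>createClientRegionFactory(ClientRegionShortcut.PROXY)
            .create("People");

        System.out.println("Person is [" + people.get(1L) + "]");

        cache.close();
    }
}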
You might try annotating your model.TradeDetails application domain type with the Jackson mapping annotations, though I am not certain Gfsh actually uses them to perform the mapping, since I think (when I last checked) Gfsh was not using Jackson. Either way, it wouldn't hurt.
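If you do go that route, a minimal sketch might look like this (the @JsonProperty names are assumptions matching the keys in your put command, and only matter if the tooling really maps through Jackson):

import com.fasterxml.jackson.annotation.JsonProperty;

public class TradeDetails {

    @JsonProperty("exchange")
    private String exchange;

    @JsonProperty("Product")
    private String product;

    @JsonProperty("Account")
    private String account;

    @JsonProperty("Quantity")
    private String quantity;

    // getters and setters omitted for brevity
}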
Note, my Server was started with this SpringGemFireDataServer, which is based on SpringGemFireServer.
Hope this helps (a little :-).
Cheers!
John
I'm trying to set up a database in an elastic pool using Bicep. So far I've created a SQL server and a related elastic pool successfully. When I try to then create a database that refers to these parts, I come unstuck with a helpful error from Azure:
'The language expression property array index '1' is out of bounds.'
I'm really unclear on what settings I need to put in the SKU and other properties of the sqlServer configuration. So far I have the following:
resource sqlDatabase 'Microsoft.Sql/servers/databases@2022-05-01-preview' = {
  parent: sqlServer
  name: databaseName
  location: location
  sku: {
    name: databaseSku
  }
  properties: {
    elasticPoolId: elasticPoolId
    collation: collation
    maxSizeBytes: maxDatabaseSizeInBytes
    catalogCollation: collation
    zoneRedundant: zoneRedundant
    readScale: 'Disabled'
    requestedBackupStorageRedundancy: 'Zone'
  }
}
I want to use the Standard elastic pool, and I've tried passing that as the databaseSku; I want to use 50 DTUs as the limit. But there are capacity, family, size and tier to consider, and from PowerShell I get these sorts of options:
Sku Edition Family Capacity Unit Available
------------ ---------------- -------- ---------- ------ -----------
StandardPool Standard 50 DTU True
StandardPool Standard 100 DTU True
StandardPool Standard 200 DTU True
StandardPool Standard 300 DTU True
So how do I map my SQL database onto my SQL server in that pool using the 50 DTU StandardPool settings? Capacity appears to be a string as well in this template!
I found out that, firstly, you don't supply a SKU to the SQL database, as it inherits the SKU information from the pool (which makes sense). Secondly, in my reference to the elastic pool above, I was using the following syntax:
resource elasticPool 'Microsoft.Sql/servers/elasticPools@2022-05-01-preview' existing = {
  name: 'mything-pool'
}
and had excluded the parent for the pool, so the correct reference to the pool would have been:
resource elasticPool 'Microsoft.Sql/servers/elasticPools@2022-05-01-preview' existing = {
  name: 'mything-pool'
  parent: dbServer
}
which then fixed my obscure error.
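Putting both fixes together, the database resource ends up looking something like this (a sketch reusing the names from the question, with the pool referenced through its symbolic name rather than a separately passed elasticPoolId):

resource sqlDatabase 'Microsoft.Sql/servers/databases@2022-05-01-preview' = {
  parent: sqlServer
  name: databaseName
  location: location
  // no sku block: the database inherits its SKU from the elastic pool
  properties: {
    elasticPoolId: elasticPool.id
    collation: collation
    maxSizeBytes: maxDatabaseSizeInBytes
    catalogCollation: collation
    zoneRedundant: zoneRedundant
    readScale: 'Disabled'
    requestedBackupStorageRedundancy: 'Zone'
  }
}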
I tried READ REPORT for a global class, but it doesn't work. I need to read a global class's source code into a table.
I found the SEO_METHOD_* function modules, but those only return metadata about the class, not its source code.
Is there any FM or method similar to READ REPORT, but for global classes?
Thank you for your help.
All ABAP code (reports, function modules, class pools, etc.) is stored in the table REPOSRC as "include programs". This table may only be read via the ABAP statement READ REPORT.
You need to know the names of these include programs for a class pool; a short sketch of reading them follows the list below.
For a class pool named ZCL_X, the ABAP source code is stored in these include programs:
ZCL_X=========================CS : this include contains the whole source code, but only if it has been changed via the source-based editor or via Eclipse ADT.
ZCL_X=========================CP : main code, which lists all or most of the next include programs
** NB: the CP suffix always starts at the 31st character; all characters between the class name and the 31st character are filled with =. Example: if the class pool is named ZCL_XXXXX, the include is named ZCL_XXXXX=====================CP.
ZCL_X=========================CU : public section
ZCL_X=========================CI : private section
ZCL_X=========================CO : protected section
ZCL_X=========================CM+++ : methods
** +++ is a 3-character code corresponding to a method, as defined in table TMDIR. The column METHODNAME contains the method name and METHODINDX contains an integer used to build +++ (in effect base-36, zero-padded to three characters). Examples:
** 1 to 9 : 001 to 009
** 10 to 35 : 00A to 00Z
** 36 to 45 : 010 to 019
** 46 to 71 : 01A to 01Z
** 72 to 81 : 020 to 029
** etc.
ZCL_X=========================CCDEF : local class definitions
ZCL_X=========================CCMAC : macros
ZCL_X=========================CCIMP : local class implementations
ZCL_X=========================CCAU : local test classes
and more ...
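As promised above, here is a minimal sketch of reading a class pool's source. It assumes the standard helper class CL_OO_CLASSNAME_SERVICE (which builds these include names for you) is available, and a 7.40+ system for the inline declarations:

DATA lt_source TYPE TABLE OF string.

" GET_CLASSPOOL_NAME builds the include name: the class name padded
" with '=' up to 30 characters, followed by 'CP'
DATA(lv_include) = cl_oo_classname_service=>get_classpool_name( 'ZCL_X' ).

READ REPORT lv_include INTO lt_source.
IF sy-subrc = 0.
  " lt_source now holds the whole class pool source
ENDIF.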
Use CL_RECA_RS_SERVICES, method GET_SOURCE, like this:
CALL METHOD cl_reca_rs_services=>get_source
  EXPORTING
    id_objtype = 'CLAS'
    id_objname = 'CL_SALV_BS_RUNTIME_INFO'
  IMPORTING
    et_source  = DATA(source)
  EXCEPTIONS
    not_found  = 1
    OTHERS     = 2.
I have configured Liferay to use an LDAP server, which works fine as long as Import is enabled.
As soon as I switch on the Export enabled option and a user tries to log in, it throws an exception. Strangely, the user from Liferay is exported to the LDAP server.
Caused by: javax.naming.directory.SchemaViolationException: [LDAP:
error code 67 - NOT_ALLOWED_ON_RDN: failed for MessageType :
MODIFY_REQUEST, Message ID : 6, Modify Request, Object :
'cn=johndoe+mail=johndoeldap@liferay.com+sn=doe,dc=example,dc=com',
Modification[0], Operation : replace, Modification sn: doe,
Modification[1], Operation : replace, Modification sn: doe,
Modification[2], Operation : replace, Modification givenName: johndoe,
Modification[3], Operation : replace, Modification mail: johndoeldap@liferay.com,
Modification[4], Operation : replace, Modification cn: doe doe,
org.apache.directory.api.ldap.model.message.ModifyRequestImpl@32d7606a:
ERR_62 Entry
cn=johndoe+mail=johndoeldap@liferay.com+sn=doe,dc=example,dc=com does
not have the cn attributeType, which is part of the RDN";]; remaining
name 'cn=johndoe+mail=johndoeldap@liferay.com+sn=doe,dc=example,dc=com'
[Sanitized]
After configuring LDAP on Liferay, I am able to connect to LDAP correctly and view users too.
Below are the user field mapping, the export and group mapping, and the LDAP configuration (shown as screenshots in the original post).
I got it sorted out by correcting the user field mapping.
While importing, data came in from LDAP without any exceptions. While exporting to LDAP, on the other hand, the 'cn' attribute was mapped multiple times (both to Screen name and to Full name) when it must be used uniquely. So even though the user data was exported from Liferay, this led to the SchemaViolationException and did not allow the user to log in to the portal.
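For illustration, a mapping along these lines keeps each LDAP attribute unique (a hypothetical portal-ext.properties sketch; the exact keys and attribute names depend on your Liferay version and directory schema):

# cn is used exactly once; the screen name maps to uid instead
ldap.user.mappings.0=screenName=uid\npassword=userPassword\nemailAddress=mail\nfullName=cn\nfirstName=givenName\nlastName=sn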
Hi all, hoping someone can assist me with some queries/configuration for the use of the Geode Redis Adapter. I'm having some difficulty ascertaining how/whether I can configure a number of Redis servers within my Geode cluster to function in a high availability setup.
I'm very new to Geode, but understand that in a traditional Geode application, the client interacts with a locator process to access data from the cluster and balance load. Given that the aim of this adapter is to function as a drop-in replacement for Redis (i.e. no change required on the client) I imagine it functions somewhat differently.
Here is what I have tried so far:
I have built from source according to this link and successfully got the gfsh cli up on 2 CentOS 7 VMs:
192.168.0.10: host1
192.168.0.15: host2
On host1, I run the following commands:
gfsh>start locator --name=locator1 --bind-address=192.168.0.10 --port=10334
gfsh>start server --name=redis --redis-bind-address=192.168.0.10 --redis-port=11211 --J=-Dgemfireredis.regiontype=PARTITION_REDUNDANT
On host2, I run the following command:
gfsh>start server --name=redis2 --redis-bind-address=192.168.0.15 --redis-port=11211 --J=-Dgemfireredis.regiontype=PARTITION_REDUNDANT --locators=192.168.0.10[10334]
On host1, I examine the current configuration:
gfsh>list members
Name | Id
-------- | -------------------------------------------------
locator1 | 192.168.0.10(locator1:16629:locator)<ec><v0>:1024
redis2 | 192.168.0.15(redis2:6022)<ec><v2>:1024
redis | 192.168.0.10(redis:16720)<ec><v1>:1025
gfsh>list regions
List of regions
-----------------
__HlL
__ReDiS_MeTa_DaTa
__StRiNgS
For each of the regions, I can see both servers redis & redis2 as Hosting Members, e.g.:
gfsh>describe region --name=__StRiNgS
..........................................................
Name : __StRiNgS
Data Policy : normal
Hosting Members : redis2
redis
Non-Default Attributes Shared By Hosting Members
Type | Name | Value
------ | ----- | -----
Region | size | 0
| scope | local
At this point, I turned to the redis-cli for some testing. Given the previous output, my expectation was that if I set a key on one server, I should be able to read it back from the other server:
192.168.0.10:11211> set foo 'bar'
192.168.0.10:11211> get foo
"bar"
192.168.0.15:11211> get foo
(nil)
gfsh>describe region --name=__StRiNgS
..........................................................
Name : __StRiNgS
Data Policy : normal
Hosting Members : redis2
redis
Non-Default Attributes Shared By Hosting Members
Type | Name | Value
------ | ----- | -----
Region | scope | local
Non-Default Attributes Specific To The Hosting Members
Member | Type | Name | Value
------ | ------ | ---- | -----
redis2 | Region | size | 0
redis | Region | size | 1
As you can see, a query against host2 for the key added on host1 returned (nil). I'd greatly appreciate any help here. Is it possible to achieve what I'm aiming for, or does the Redis adapter only allow you to scale out a single server?
This may not be an answer, but it is probably too long for a comment.
I am not familiar with the specific Geode Redis Adapter you are talking about here. But from my experience with Gemfire/Geode, there are things you may want to check:
You started the first host without the locators param; I am not sure whether this will cause any problem with cluster formation. There are two ways in GemFire to form a cluster: by specifying the mcast port or by specifying locators. See the sketch after this list.
The scope of the region you are inspecting looks wrong. "local" will not replicate any updates. When you set it up correctly, it should show up as DISTRIBUTED_NO_ACK / DISTRIBUTED_ACK / GLOBAL, I suppose.
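For instance, the first server could be started with the locators param as well (a sketch that just adds --locators to the flags already used in the question):

gfsh>start server --name=redis --redis-bind-address=192.168.0.10 --redis-port=11211 --locators=192.168.0.10[10334] --J=-Dgemfireredis.regiontype=PARTITION_REDUNDANT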
Hope this helps
Xiawei is correct: a region with scope "local" will not replicate entries to the other members. The workaround for this could have been to just create a region named __StRiNgS from gfsh, but since region names starting with two underscores are for internal use only, that's not possible.
I have filed this issue https://issues.apache.org/jira/browse/GEODE-1921 to fix the problem. I also attached a patch for this issue. With the patch applied I see that the __StRiNgS region now is PARTITION.
gfsh>start locator --name=locator1
gfsh>start server --name=redis --redis-port=11211
gfsh>list regions
List of regions
-----------------
__HlL
__StRiNgS
__ReDiS_MeTa_DaTa
gfsh>describe region --name=/__StRiNgS
..........................................................
Name : __StRiNgS
Data Policy : partition
Hosting Members : redis
Non-Default Attributes Shared By Hosting Members
Type | Name | Value
------ | ----------- | ---------
Region | size | 0
| data-policy | PARTITION
On host1:
start locator --name=locator1 --bind-address=192.168.0.10 --port=10334
start server --name=redis --redis-bind-address=192.168.0.10 --redis-port=11211 --J=-Dgemfireredis.regiontype=REPLICATE
NOTE: You have to use the regiontype "REPLICATE" if you wish the data to be replicated across the servers.
On host2:
start server --name=redis2 --redis-bind-address=192.168.0.15 --redis-port=11211 --J=-Dgemfireredis.regiontype=REPLICATE --locators=192.168.0.10[10334]
https://geode.apache.org/docs/guide/11/developing/region_options/region_types.html
We have a Windows Azure Federated database, which we need to turn into a normal database (due to Federations being retired shortly).
Having read through copious amounts of documentation and tried various things, the answer seems to be the ALTER FEDERATION ... SWITCH OUT AT command:
https://msdn.microsoft.com/library/dn269988.aspx
Removes all federation metadata and constraints from the federation member database. After execution, the federation member is a standalone database.
The format for the command is given as:
ALTER FEDERATION federation_name SWITCH OUT AT ([LOW | HIGH] distribution_name = boundary_value)
LOW or HIGH determines the federation member that will be switched out on the respective side of the given federation boundary_value. The boundary value must correspond to an existing partition value, range-high or range-low, in the existing federation.
and there is a specific example to switch out the federation member with a boundary of 99:
ALTER FEDERATION CustomerFederation SWITCH OUT AT (LOW cid = 100)
So, taking all of the above information, I queried the Federation values, which returned the following:
SELECT * FROM sys.federations
federation_id : 65536
name : CustomerFederation
SELECT * FROM sys.federation_members
federation_id : 65536
member_id : 65536
SELECT * FROM sys.federation_distributions
federation_id : 65536
distribution_name : cid
distribution_type : RANGE
system_type_id : 127
max_length : 8
precision : 19
scale : 0
collation_name : NULL
user_type_id : 127
boundary_value_in_high : 1
SELECT * FROM sys.federation_member_distributions
federation_id : 65536
member_id : 65536
distribution_name : cid
range_low : -9223372036854775808
range_high : NULL
However, no matter what value I try to use for boundary_value, I get the following:
Msg 45026, Level 16, State 1, Line 1
ALTER FEDERATION SWITCH operation failed. Specified boundary value does not exist for federation distribution cid and federation CustomerFederation.
I've tried using the range_low value:
ALTER FEDERATION CustomerFederation SWITCH OUT AT (LOW cid = -9223372036854775808)
ALTER FEDERATION CustomerFederation SWITCH OUT AT (HIGH cid = -9223372036854775808)
I've also tried one value either side of that, as the example used 100 to SWITCH OUT 99.
I've tried using 0, as that's the value I use to connect to the Federation, but that gives the same error, as do -1 and 1, for both LOW and HIGH.
I've also tried specifying to use the Federation Root before running the command:
USE FEDERATION ROOT WITH RESET
GO
ALTER FEDERATION CustomerFederation SWITCH OUT AT (LOW cid = -9223372036854775808)
I have tried running it from the main database and from the Federation.
Has anyone successfully used the ALTER FEDERATION ... SWITCH OUT AT command and can point me in the right direction please?
After hunting around some more, I found a link to a Federation Migration Utility:
https://code.msdn.microsoft.com/vstudio/Federations-Migration-ce61e9c1
Looking over the code, it appeared that the correct command was one I'd already tried:
ALTER FEDERATION CustomerFederation SWITCH OUT AT (HIGH cid = -9223372036854775808)
This time it worked. I'm not sure why it didn't the first time; possibly something else I'd tried before had thrown it out.