Need clear info on creating custom statistics in GemFire?

I need to capture some metrics that are not currently provided by GemFire in its MBeans. I came across statistics as a way to capture these, but there are no clear docs on developing custom statistics. Please let me know how to create custom statistics for capturing additional data in GemFire.

The javadocs for StatisticsFactory have some good information on creating custom statistics.
You can also look at some examples of statistics usage in the Geode codebase. For example, here is a class which wraps an instance of Statistics and provides public methods to update it: FileSystemStats.java.
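To make that concrete, here is a minimal sketch of the same pattern against the Geode/GemFire statistics API; the QueueStats type, the descriptor, and the text id are hypothetical names for illustration:

```java
import org.apache.geode.StatisticDescriptor;
import org.apache.geode.Statistics;
import org.apache.geode.StatisticsFactory;
import org.apache.geode.StatisticsType;
import org.apache.geode.cache.Cache;

// Hypothetical stats wrapper, loosely modeled on FileSystemStats.
public class QueueStats {

    private final Statistics stats;
    private final int itemsQueuedId;

    public QueueStats(Cache cache) {
        // The DistributedSystem doubles as the StatisticsFactory.
        StatisticsFactory factory = cache.getDistributedSystem();

        // Define the statistics type once: a single long counter in this sketch.
        StatisticsType type = factory.createType(
            "QueueStats",                       // type name (hypothetical)
            "Statistics about my custom queue", // description
            new StatisticDescriptor[] {
                factory.createLongCounter("itemsQueued",
                    "Number of items added to the queue", "items")
            });

        // Create an instance of that type; the text id distinguishes instances.
        this.stats = factory.createAtomicStatistics(type, "queue-1");
        this.itemsQueuedId = type.nameToId("itemsQueued");
    }

    // Public method the rest of the application calls to record the metric.
    public void incItemsQueued() {
        stats.incLong(itemsQueuedId, 1);
    }
}
```

Values recorded this way end up in the statistics archive (viewable with tools like VSD once statistic sampling is enabled); as far as I know they are not automatically surfaced as MBean attributes, so if JMX exposure is a hard requirement you would still need to register a custom MBean around them.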

Related

NestJS Schema First GraphQL Serialization

I've done some research into the subject of response serialization for NestJS/GraphQL. There's some helpful information to be found here, but the documentation seems to be completely focused on a code-first approach. My project happens to be taking a schema-first approach, and from what I've read across a few sources, the option available for a schema-first project would be to implement interceptors for the resolvers and carry out the serialization there.
Before I run off and start writing these interceptors, my question is this: are there any better options provided by NestJS to implement serialization for a schema-first approach?
If it's just transformation of values, then an interceptor is a great tool for that. Everything shown for "code-first" should work for "schema-first" in terms of the high-level ideas of the framework (interceptors, pipes, filters, etc.). In fact, once the server is running, there shouldn't be a distinguishable difference between the two approaches and how they operate. The big thing to be aware of is that you won't easily be able to take advantage of class-transformer and class-validator, because the original class definitions are created via gql-codegen, but you can still extend those types and add on the necessary decorators if you choose.

Is there any way I can get information about a VkImage?

I have a VkImage; is there any way to get some part of the createInfo which was used to create this image? For example, the arrayLayers, mipLevels, extent, and format? It seems the vkGetImage* functions do not have this functionality at all.
Any information you might query about a VkImage is information which, at one point, you must have had because you gave it to Vulkan. Making a Vulkan driver implementation keep track of information you have is a waste of memory and a possible source of driver bugs. Therefore, Vulkan expects that, if you find some information about a VkImage to be important, then you will store that information alongside the image after its creation.
In general, Vulkan has no querying APIs for any information which you yourself provided for any object.

@EnableRedisRepositories - What is its use in Spring Data Redis?

I searched a lot over the web for more practical usage of @EnableRedisRepositories, but I did not find any. Even in my Spring Boot + Spring Data Redis example, I removed @EnableRedisRepositories and still could not see what difference it makes: data is still persisted to the store and retrieved fine.
Can somebody please clarify?
I went through the annotation's javadoc, but it is not very clear:
Annotation to activate Redis repositories. If no base package is configured through either {@link #value()}, {@link #basePackages()} or {@link #basePackageClasses()} it will trigger scanning of the package of annotated class.
It lets Spring scan your packages for repository classes/interfaces and then use Redis as the store to persist your objects to, instead of a classic relational database.
Spring Data docs tell us:
NoSQL storage systems provide an alternative to classical RDBMS for horizontal scalability and speed. In terms of implementation, key-value stores represent one of the largest (and oldest) members in the NoSQL space.
The Spring Data Redis (SDR) framework makes it easy to write Spring applications that use the Redis key-value store by eliminating the redundant tasks and boilerplate code required for interacting with the store through Spring’s excellent infrastructure support.
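Note that in a Spring Boot application, Redis repository support is auto-configured for you even without the annotation, which is likely why removing @EnableRedisRepositories made no visible difference in your example. Outside of Boot (or to control which packages are scanned), you activate it yourself. A minimal sketch, where the Person type, the "people" keyspace, and the package name are hypothetical:

```java
import org.springframework.context.annotation.Configuration;
import org.springframework.data.annotation.Id;
import org.springframework.data.redis.core.RedisHash;
import org.springframework.data.redis.repository.configuration.EnableRedisRepositories;
import org.springframework.data.repository.CrudRepository;

// Hypothetical entity: stored as a Redis hash under the "people" keyspace.
@RedisHash("people")
class Person {
    @Id String id;
    String name;
}

// Spring implements this interface for you and backs it with Redis.
interface PersonRepository extends CrudRepository<Person, String> {}

@Configuration
@EnableRedisRepositories(basePackages = "com.example.repositories") // assumption: your repository package
class RedisRepositoryConfig {
}
```

With that in place, personRepository.save(...) persists each Person as a Redis hash under a key like people:<id>, and findById(...) reads it back.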

GemFire model which can insert data to GemFire via microservice

I am trying to find an implementation of GemFire, and I am in search of a model which can insert data into GemFire as well. I am getting a PDX serialization error when using a CacheWriter.
There are plenty of examples in both the Pivotal GemFire and Spring space.
As you may know, Pivotal GemFire is based on the open source Apache Geode, which has a few How-To articles on its Wiki. There is an article on Geode In 5 Minutes that leads you to a few other places.
With Spring Data GemFire, there are plenty of examples, starting with the Spring GemFire Examples GitHub project.
I also have several other examples in my own GitHub account, such as...
The Contacts Application Reference Implementation (RI). This is the most current, up-to-date set of examples since I use these as a single source of truth for conference talks as well as to showcase the latest developments in GemFire with Spring.
I also have an entire GitHub Repository (spring-gemfire-tests) dedicated to reproducing/understanding customer issues, building prototypes or proof-of-concepts, and so on.
Last, but certainly not least, you can review the SDG test suite, which has many tests that can be used as examples of putting/getting data to/from GemFire using Spring, along with configuring PDX. The repository in #2 above is a good resource for this as well.
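To give a concrete flavor of the PDX configuration those examples cover, here is a minimal sketch using Spring Data GemFire's annotation-based configuration; the Customer type and the "Customers" region name are hypothetical:

```java
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.data.annotation.Id;
import org.springframework.data.gemfire.config.annotation.ClientCacheApplication;
import org.springframework.data.gemfire.config.annotation.EnableEntityDefinedRegions;
import org.springframework.data.gemfire.config.annotation.EnablePdx;
import org.springframework.data.gemfire.mapping.annotation.Region;
import org.springframework.data.repository.CrudRepository;

// Hypothetical domain type mapped to a "Customers" region.
@Region("Customers")
class Customer {
    @Id Long id;
    String name;
}

// Spring Data repository the microservice uses to insert/read data.
interface CustomerRepository extends CrudRepository<Customer, Long> {}

@SpringBootApplication
@ClientCacheApplication
@EnableEntityDefinedRegions(basePackageClasses = Customer.class)
@EnablePdx // configures SDG's MappingPdxSerializer for your domain types
public class GemFireClientApplication {
    public static void main(String[] args) {
        SpringApplication.run(GemFireClientApplication.class, args);
    }
}
```

Calling customerRepository.save(customer) then writes the object into the Customers region as PDX. If the server-side code (e.g. your CacheWriter) does not have the domain classes on its classpath, you may additionally want @EnablePdx(readSerialized = true) so the server works with PdxInstances rather than trying to deserialize your classes.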
If you are looking for something in particular pertaining to your use case (UC), let us know what it is and perhaps we/I can direct you better.
Hope this helps!
-John

AWSDynamoDBObjectMapper or AWSDynamoDB?

The AWS documentation is seemingly endless, and different pages tell me different things. For example, one page tells me that AWSDynamoDBObjectMapper is the entry point to working with DynamoDB, while another tells me that AWSDynamoDB is the entry point to working with DynamoDB. Which class should I be using? Why?
EDIT: One user mentioned he didn't understand the question. To be more clear, I want to know, in general, what the difference is between using AWSDynamoDB and AWSDynamoDBObjectMapper as entry points for interfacing with DynamoDB.
Doc links for both:
AWSDynamoDB
AWSDynamoDBObjectMapper
Since both can clearly read, write, query, and scan, you need to pick out the differences. It appears to me that the ObjectMapper class supports the concept of mapping an AWSDynamoDBModel to a DB vs. directly manipulating specific objects (as AWSDynamoDB does). Moreover, it appears that AWSDynamoDB also supports methods for managing tables directly.
I would speculate that AWSDynamoDB is designed for managing data where the schema is pre-defined on the DB, and AWSDynamoDBObjectMapper is designed for managing data where the schema is defined by the client.
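For what it's worth, the AWS SDK for Java has the same split (the low-level AmazonDynamoDB client vs. DynamoDBMapper), which makes the difference easy to see side by side. A sketch using that Java SDK, with a hypothetical Notes table:

```java
import java.util.HashMap;
import java.util.Map;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDB;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClientBuilder;
import com.amazonaws.services.dynamodbv2.datamodeling.DynamoDBAttribute;
import com.amazonaws.services.dynamodbv2.datamodeling.DynamoDBHashKey;
import com.amazonaws.services.dynamodbv2.datamodeling.DynamoDBMapper;
import com.amazonaws.services.dynamodbv2.datamodeling.DynamoDBTable;
import com.amazonaws.services.dynamodbv2.model.AttributeValue;
import com.amazonaws.services.dynamodbv2.model.PutItemRequest;

// Hypothetical "Notes" table with a string hash key "id".
@DynamoDBTable(tableName = "Notes")
class Note {
    private String id;
    private String text;

    @DynamoDBHashKey(attributeName = "id")
    public String getId() { return id; }
    public void setId(String id) { this.id = id; }

    @DynamoDBAttribute(attributeName = "text")
    public String getText() { return text; }
    public void setText(String text) { this.text = text; }
}

public class MapperVsLowLevel {
    public static void main(String[] args) {
        AmazonDynamoDB client = AmazonDynamoDBClientBuilder.defaultClient();

        // Low-level style: you build the attribute-value map yourself.
        Map<String, AttributeValue> item = new HashMap<>();
        item.put("id", new AttributeValue("note-1"));
        item.put("text", new AttributeValue("hello"));
        client.putItem(new PutItemRequest().withTableName("Notes").withItem(item));

        // Mapper style: the SDK maps your annotated model class for you.
        DynamoDBMapper mapper = new DynamoDBMapper(client);
        Note note = new Note();
        note.setId("note-2");
        note.setText("world");
        mapper.save(note);
    }
}
```

The iOS classes you linked appear to follow the same pattern: AWSDynamoDB speaks in raw attribute-value maps and table operations, while AWSDynamoDBObjectMapper works in terms of your model classes.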
All of this speculation aside though, the most important bit you can glean from the documentation is:
Instead of making the requests to the low-level DynamoDB API directly from your application, we recommend that you use the AWS Software Development Kits (SDKs). The easy-to-use libraries in the AWS SDKs make it unnecessary to call the low-level DynamoDB API directly from your application. The libraries take care of request authentication, serialization, and connection management. For more information, go to Using the AWS SDKs with DynamoDB in the Amazon DynamoDB Developer Guide.
I would recommend this approach rather than worrying about the ambiguity of class documentation.