Ordering Pointer Field in Parse Server with Rest API - parse-server

I have a problem with the Parse Server REST API when I want to order by a pointer field. For example, I have a class SecurityAttendance with 'cluster' as a pointer field, and the Cluster class has columns such as name.
How can I order by a pointer field? I have tried:
include=cluster&order=cluster.name
It does not work.
Any solution? Thanks.

Related

How to get the base type of an array type in portable JDBC

If you have a table with a column whose type is SQL ARRAY, how do you find the base type of the array type, aka the type of the individual elements of the array type?
How do you do this in vendor-agnostic pure JDBC?
How do you do this without fetching and inspecting actual row data? Equivalently: what if the table is empty?
Similar questions were asked here:
How to get array base type in postgres via jdbc
JDBC : get the type of an array from the metadata
However, I am asking for a vendor-agnostic way through the JDBC API itself. I'm asking: How is one supposed to solve this problem with vendor-agnostic pure JDBC? This use case seems like a core use case of JDBC, and I'm really surprised that I can't find a solution in JDBC.
I've spent hours reading and re-reading the JDBC API javadocs, and several more hours scouring the internet, and I'm greatly surprised that there doesn't seem to be a proper way of doing this via the JDBC API. It should be right there via DatabaseMetaData or ResultSetMetaData, but it's apparently not.
Here are the insufficient workarounds and alternatives that I've found.
Fetch some rows until you get a row with an actual value for that column, get the column value, cast to java.sql.Array, and call getBaseType.
For postgres, assume that SQL ARRAY type names are encoded as ("_" + baseTypeName).
For Oracle, use Oracle specific extensions that allow getting the answer.
Some databases have a special "element_types" view which contains one row for each SQL ARRAY type that is used by current tables et al, and the row contains the base type and base type name.
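The first workaround (probe actual row data) and the postgres naming kludge can be sketched in plain JDBC like this; the table and column names passed in are placeholders, and as noted above the probe necessarily fails on an empty table or an all-null column:

```java
import java.sql.Array;
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class ArrayBaseTypeProbe {

    /**
     * Workaround 1: fetch rows until a non-null array value appears, then ask
     * the value itself for its base type. Vendor-agnostic, but fails when the
     * table is empty or the column is all-null.
     */
    public static int probeBaseType(Connection conn, String table, String column)
            throws SQLException {
        String sql = "SELECT " + column + " FROM " + table
                + " WHERE " + column + " IS NOT NULL";
        try (Statement st = conn.createStatement();
             ResultSet rs = st.executeQuery(sql)) {
            while (rs.next()) {
                Array arr = rs.getArray(1);
                if (arr != null) {
                    // A java.sql.Types constant, e.g. Types.INTEGER
                    return arr.getBaseType();
                }
            }
        }
        throw new SQLException("No non-null value found; base type undeterminable this way");
    }

    /** Workaround 2 (postgres-specific kludge): array type names are "_" + baseTypeName. */
    public static String pgBaseTypeName(String arrayTypeName) {
        if (!arrayTypeName.startsWith("_")) {
            throw new IllegalArgumentException("Not a postgres array type name: " + arrayTypeName);
        }
        return arrayTypeName.substring(1);
    }
}
```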
My context is that I would like to use vendor-supplied JDBC connectors in Spark, in the cloud, in my company's product, where metadata discovery becomes important. I'm also investigating the feasibility of writing JDBC drivers myself for other data sources that have neither a JDBC driver nor a Spark connector yet. Metadata discovery is important so that one can define the Spark InternalRow and the Spark-JDBC data getters correctly. Currently, Spark-JDBC has very limited support for SQL ARRAY and SQL STRUCT; I managed to provide the missing bits with a day or two of coding, but during that process I hit this problem, which is blocking me. If I have control over the JDBC driver implementation, then I could use a kludge (i.e. encode the type information in the type name, and in the Spark JdbcDialect, take the type name and decode it to create the Catalyst type). However, I want to do it in the proper JDBC way, and ideally in a way that some other vendor-supplied JDBC drivers will support.
PS: It took me a surprising amount of time to locate DatabaseMetaData.getAttributes(). If I'm reading this right, this can give me the names and types of the fields/attributes of a SQL STRUCT. Again, I'm very surprised that I can get the names and types of the fields/attributes of a SQL STRUCT in vendor-agnostic pure JDBC but not get the base-type of a SQL ARRAY in vendor-agnostic pure JDBC.

Understanding GraphQL

I started experimenting with the GraphQL WordPress API.
I am querying the menus. According to the documentation, the query is very long.
I would expect that querying
{
menus
}
would bring back all the data nested in menus, but it does not.
Why is this? What is the way to get all the nested data in an object, so I can see what's in there?
Thank you for your time
The rule is that every "leaf" field in a GraphQL query must be a scalar, such as Int, Boolean, or String. So if the menus field on the root Query type is a scalar, it is a valid query and will return something.
If not, you have to keep navigating into the Menu type and pick the fields that you want to include in the GraphQL query, such as:
{
menus {
id
createdDate
}
}
There is no wildcard that can represent all fields in the current GraphQL spec. You have to explicitly declare every field you want to select in the query. By looking at the GraphQL schema, you can see the available fields for each type. One tip is to rely on the GraphQL introspection system: use a GraphQL client such as Altair, GraphiQL, or GraphQL Playground, most of which have an auto-suggest feature that guides you in composing a query by suggesting which fields are available on a type.
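For example, a standard introspection query like the following lists the fields available on a type (the type name Menu here is an assumption about this particular schema; substitute whatever your schema actually calls it):

```graphql
{
  __type(name: "Menu") {
    name
    fields {
      name
      type {
        name
        kind
      }
    }
  }
}
```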
P.S. A similar analogy in SQL is that there is no select * from foo; you have to explicitly list the columns you want in the select clause, such as select id, name, address from foo.
If you keep in mind that you're getting back a JSON object, you can think of your GraphQL query as defining the left-hand side of the response (this is intentional in how it was designed), e.g. just the keys. So unless there are null values, what you get back should exactly match the shape of the query.
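For instance, assuming the schema exposes an id field on menu items (an assumption for illustration), the query and the response share the same shape:

```graphql
# The query you send: just the keys, no values.
{
  menus {
    id
  }
}
# The JSON you get back mirrors that shape, with the values filled in:
# { "data": { "menus": [ { "id": "..." } ] } }
```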
If you want to see what can be queried, you need access to the schema itself. If it's a schema provided by someone else (looks like WordPress in this case), they should also have provided the means to explore and understand it.
That is the main feature of GraphQL: you specify exactly what data you need in a query. Because of that, you can't just query menus that way; you need to specify every nested field of menus that you need, and only then will it work :)

RavenDB return IDynamicJsonObject Or RavenJObject

I am using RavenDB with the latest client and also for the server side. It's really weird that when I use Load(string id) for the first time, it returns a RavenJObject. But with the same id, the second time, it returns an IDynamicJsonObject.
Can someone help me explain it?
Thanks
The issue is that you are likely creating the documents manually, so they don't have the Raven-Clr-Type metadata value.
Because of that, RavenDB doesn't know what the type is and uses dynamic, since you didn't provide a type for it.
The second time, it already had a type, because you saved the document; the type metadata is present, so the client can infer what the type is.

How do I use a cache in Kettle (Pentaho)?

I am processing data where I get some information from a REST API, based on the value of a field.
Now, the value may repeat for that field, and if I have already fetched the data for that value from REST, I would like to reuse it, saving an API call (the slowest operation in the transformation).
Is this possible? If yes, how?
Regards
Ajay
#RFVoltini you are right; maybe we could try to set up an H2 db server for this purpose: http://type-exit.org/adventures-with-open-source-bi/2011/01/using-an-on-demand-in-memory-sql-database-in-pdi/
Another option is using memcached in Java: http://sacharya.com/using-memcached-with-java/
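Whichever store you use, the reuse pattern itself is just a map keyed by the field value: look up the key first, and only call the API on a miss. A minimal sketch in plain Java (all names here are made up for illustration; the fetch function is a stand-in for the real REST call):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

public class RestCache {
    private final Map<String, String> cache = new HashMap<>();
    private final Function<String, String> fetch; // stand-in for the real REST call
    public int apiCalls = 0; // counts actual API calls, for illustration

    public RestCache(Function<String, String> fetch) {
        this.fetch = fetch;
    }

    /** Return the cached result if this key was seen before; otherwise call the API once. */
    public String lookup(String key) {
        return cache.computeIfAbsent(key, k -> {
            apiCalls++;
            return fetch.apply(k);
        });
    }
}
```

A repeated value then costs nothing: looking up the same key twice triggers only one fetch.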
I've made an example transformation that gets country names by country code from a web service. I used the idea that you only need to get the distinct country codes/names from the web service, then look them up in your main pipeline.
Take a look at this example: https://docs.google.com/open?id=0B-AwXLgq0XmaV0V0cHlfTFZlVUU and see if this method applies to you.

Mongomapper: does "_id" field conflict with "id"?

I have a collection that contains both an _id and an id field. When I search by the id field in the mongo client, everything is fine. When I search through a MongoMapper model, like Product.find_by_id(6) or Product.where(:id => 6), it returns an empty Plucky object, and I can see that it looks for the _id field instead of id.
As I understand it, MongoMapper just always uses _id, no matter whether you specifically want to find something by id.
Is there any workaround for this, or am I doing it wrong?
I believe MongoMapper treats id and _id equally; id is just a friendlier representation of _id.
In your particular case, is there any reason you need to have the id field as well? I'd recommend changing that, particularly if there is another, more descriptive name that would fit. If you are actually using the id field as a unique identifier (which it sounds like you might be), the best approach would probably be to store it in the _id field instead. As you are already aware, _id is required on all MongoDB documents and can either be specified by you (your application) or added later by your driver, outside the scope of your application code.
Hope that helps.
It could be caused by this issue (https://github.com/jnunemaker/mongomapper/issues/195) if you ever had an instance with a key of "id". MongoMapper remembers every key from every instance unless you clear the key explicitly.