react-native-maps-super-cluster : How to make multiple types of clusters - react-native-maps

I have managed to make clusters with react-native-maps-super-cluster. My requirement now is to create multiple clusters based on the category of the cluster points. For example, if I have 10 markers, 5 of one type and the remaining 5 of another, I need 2 clusters with the markers grouped into the two types.

Related

determine maximum value of each row with the column name in Python

I have a table (attached).
The table includes a number of different car models with their EU class (the columns are the number of cars of each model in each class).
I am trying to identify the maximum value in each row, and the EU class (column name) it belongs to, in Python.
For example, in the first row the maximum goes to Euro 5, with 3677 cars of the vehicle model 320 GH.
I tried different commands, such as:
maxValuesObj = D_high_model_EUstd.loc[D_high_model_EUstd['Model of Vehicle'].idxmax()]
but faced this error: "reduction operation 'argmax' not allowed for this type".
I was wondering if anybody can help or suggest a solution.
Thanks
You can use DataFrame.idxmax.
For example:
df.drop(columns="Model of Vehicle").idxmax(axis=1)  # drop the text column, since idxmax doesn't work with non-numeric data
Or create another dataframe:
pd.concat({
    "Model of Vehicle": df["Model of Vehicle"],
    "Max": df.drop(columns="Model of Vehicle").idxmax(axis=1),
}, axis=1)
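A minimal runnable version of the same idea; the column names and numbers below are made up, since the real table was only attached as an image:
import pandas as pd

# Made-up data in the shape described in the question.
df = pd.DataFrame({
    "Model of Vehicle": ["320 GH", "450 XL"],
    "Euro 4": [120, 900],
    "Euro 5": [3677, 250],
    "Euro 6": [88, 410],
})

counts = df.drop(columns="Model of Vehicle")
result = pd.concat({
    "Model of Vehicle": df["Model of Vehicle"],
    "Max Class": counts.idxmax(axis=1),  # column name of each row's maximum
    "Max Count": counts.max(axis=1),     # the maximum value itself
}, axis=1)
print(result)  # first row: 320 GH -> Euro 5 with 3677, matching the example above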

Import data from csv into database when not all columns are guaranteed

I am trying to build an automated feature for our database that takes NOAA weather data and imports it into our own tables.
Currently we have 3 steps:
1. Import the data literally into its own table to preserve the original data
2. Copy its data into a table that better represents our own data in structure
3. Then convert that table into our own data
The problem I am having stems from the data that NOAA gives us. It comes in the following format:
Station Station_Name Elevation Latitude Longitude Date MXPN Measurement_Flag Quality_Flag Source_Flag Time_Of_Observation ...
Starting with MXPN (maximum temperature for water in a pan), each observation type consists of its own column plus the 4 columns that follow it, and that same 5-column group repeats for each form of weather observation. The problem, though, is that if a particular type of weather was not observed at any of the stations reported, that set of 5 columns is omitted entirely.
For example, if you look at Central Florida stations, you will find no SNOW (snowfall measured in mm). However, if you look at stations in New Jersey, you will find this column, as they report snowfall. This means a 1:1 mapping of columns is not possible between different reports, and the order of columns is not guaranteed.
Even worse, some of the weather types include wildcards in their definition, e.g. SN*#, where * is a number from 0-8 representing the type of ground and # is a number from 1-7 representing the depth at which the minimum soil temperature was taken, and we'd like to collect these together.
All of these are column headers, and my instinct is to build a small Java program to map these properly to our data set as we'd like it. However, my superior believes it may be possible to have the database do this on a mass import, but he does not know how to do it.
Is there a way to do this as a mass import, or is it best for me to just write the Java program to convert the data to our format?
Systems in use:
MariaDB for the database.
CentOS 7 for the operating system (if it really becomes an issue)
Java is being done with JPA and Spring Boot, with Hibernate where necessary.
You are creating a new table for each file.
I presume that the first 6 fields are always present, and that you have 0 or more occurrences of the next 5 fields. If you were using SQL Server I would approach it as follows (MariaDB has an information_schema catalog as well):
Query the information_schema catalog to get a count of the fields in the table. If the count is 6 then no observations are present; if 11 columns, then you have 1 observation; if 16, then you have 2 observations; etc.
Now that you know the number of observations, you can write some SQL that loops over the observations and inserts them into a child table with a link back to a parent table which holds the first 6 fields.
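A minimal sketch of the column-counting step against MariaDB, assuming a pymysql connection; the connection details, schema name, and staging table name (noaa_raw) are all placeholders:
import pymysql

# Placeholder connection details.
conn = pymysql.connect(host="localhost", user="user", password="pw", database="weather")

FIXED_COLUMNS = 6    # Station .. Date, always present
COLUMNS_PER_OBS = 5  # value column + 4 flag/observation-time columns

with conn.cursor() as cur:
    # Count the columns of the staging table via information_schema.
    cur.execute(
        """
        SELECT COUNT(*)
        FROM information_schema.columns
        WHERE table_schema = %s AND table_name = %s
        """,
        ("weather", "noaa_raw"),
    )
    (total_columns,) = cur.fetchone()

observation_groups = (total_columns - FIXED_COLUMNS) // COLUMNS_PER_OBS
print(f"{observation_groups} observation group(s) in this file")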
Apologies if my assumptions are way off.
-HTH

How to find nodes with outgoing relationships but no relationships between 2nd-degree nodes

One question: I have 15000 nodes with relationships. Suppose a few of them look like the diagram below; all have the same label (e.g. Number) but different properties, and the relationship type is the same everywhere. Now I want to see if there are any nodes between which no relationship exists.
      N
      |
   L--F--M
     /
  H-B-G
   /
  A
   \C-D
    \E
Is there any query that will help me find the nodes N, M, L that are related to F but have no relationship with each other? The start node is A. Can I check that with a Cypher query? I was trying with paths, but that only tells me whether a path exists between two nodes that I specify, not across an arbitrary number n of nodes connected to one node X.
Thanks.
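A hedged sketch of one way to express this check in Cypher, run here through the official neo4j Python driver; the connection details are placeholders, the query ignores relationship types, and id() is simply the classic way to deduplicate pairs:
from neo4j import GraphDatabase

# Placeholder connection details.
driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

# For every hub node, find pairs of its neighbours that have no
# relationship between each other (e.g. N, M, L around F).
query = """
MATCH (hub)--(n), (hub)--(m)
WHERE id(n) < id(m) AND NOT (n)--(m)
RETURN hub, n, m
"""

with driver.session() as session:
    for record in session.run(query):
        print(record["hub"], record["n"], record["m"])

driver.close()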

Build a Kibana Histogram with buckets dynamically created by ElasticSearch terms aggregation

I want to be able to combine the functionality of the Kibana Terms Graph (the ability to create buckets based on the unique values of a particular attribute) and the Histogram Graph (separating data into buckets based on queries and then illustrating the data over time).
Overall, I want to create a Histogram, but I only want to build it from the results of one query, not multiple queries as is done in the Kibana demo app. Instead, I want each bucket to be created dynamically per unique value of my particular field. For example, consider the following data returned by my query:
{"myValueType": "New York"}
{"myValueType": "New York"}
{"myValueType": "New York"}
{"myValueType": "San Francisco"}
{"myValueType": "San Francisco"}
Also assume that each record has a timestamp field for separating histogram data by date. For that particular date, I want the data to be communicated as a count of 3 in the New York bucket and a count of 2 in the San Francisco bucket. However, I am only able to show a count of 5 for my one linked query. When I configure the Histogram, I am able to specify a field to use for my timestamp, but not one to create buckets from. I could have set a field to compute a total/min/max/mean, but that field would have had to be numeric, so that is not the solution either.
If I were to use a Term Graph to create a pie or bar graph, I am indeed able to separate my data into buckets based on the unique values of my specified field (in this case, "myValueType"), but this would total up the data for all-time, not split up the data by timestamp. Although this is good information to know, it is not ideal because I wouldn't be able to detect trends in my data.
I am looking for a solution that will do one of the following:
Let me dynamically create queries in my Kibana dash board to create "buckets" in a Histogram
Allow me to run an ElasticSearch Terms Aggregation to split up my data into buckets based on "myValueType" and integrate these results into my Histogram
Customize the JSON of my dashboard, but this doesn't look possible to me
Create my own custom panel, but this is not desirable
Link a "TopN" query in Kibana. Actually, this has proven to be a workaround for my problem, because the TopN query dynamically creates one query per unique value/term from the specified fieldName. However, the problem is that I can only link one colour to this TopN query, and each unique term is placed in a bucket that uses a different shade of that colour. Ideally, every bucket in my Histogram would have a completely different colour associated with it. Imagine how difficult it would be to distinguish unique terms as the number of buckets grows.
If all else fails, I make one query per unique value from my search field. This will allow me to have one unique colour per bucket, but as the number of unique terms in the "myValueType" field changes, I need to keep adding/removing queries from Kibana, which can get quite messy.
I'm sure there is something that I am missing here. Please help me out. Many thanks.
A highly related SOF question: Is it Possible to Use Histogram Facet or Its Curl Response in Kibana
This would be a great feature. It looks like it will be supported in Kibana4, but there doesn't seem to be much more info out there than that.
For reference: https://github.com/elasticsearch/kibana/issues/1249
Maybe a little late, but it is actually possible in the newest beta release.
Kibana 4 Beta 3 installation download
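For reference, what Kibana 4 does here corresponds to a date_histogram aggregation with a terms sub-aggregation, which you can also issue directly. A minimal sketch with the elasticsearch Python client; the host, index name, timestamp field, and interval are assumptions:
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")  # placeholder host

# One date bucket per day, each split into sub-buckets per unique
# value of myValueType (3 x New York, 2 x San Francisco, etc.).
body = {
    "size": 0,
    "aggs": {
        "per_day": {
            "date_histogram": {"field": "@timestamp", "interval": "day"},
            "aggs": {
                "per_value": {"terms": {"field": "myValueType"}},
            },
        },
    },
}

response = es.search(index="my-index", body=body)
for day in response["aggregations"]["per_day"]["buckets"]:
    for value in day["per_value"]["buckets"]:
        print(day["key_as_string"], value["key"], value["doc_count"])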

SSAS: how to build a dynamic range dimension

In the current cube, I have a calculated measure of average investment dollars. Now I want to create a range dimension table dynamically, based on a different amount for every department. The table would look something like this:
Dim_DollarRange
ID  MinRange  MaxRange  Description
1   1         2         1-2
2   3         5         3-5
3   6         9         6-9
4   10        14        10-14
So basically there are two questions:
1) How do I set up a dimension table based on cube measures dynamically?
2) How do I look up values in the range dimension in SSAS?
I'm new to SSAS; thanks for any answers or tutorials!
Use the views that feed the Data Source Views to check the related fact data in your source system and filter the resulting dimension list accordingly. If you do not have a set of views interfacing between your source and cube, you can perform the same filtering within your queries directly; it is just not as clean.
I use this technique to limit long dimension lists to only the values actually used, providing users who directly access the cube (with Excel etc.) a precise list of used options within their filters/slicers. It does have the downside of masking possible options from users' reports until they are consumed, i.e. you will not see Cancelled Orders = 0 until the first cancelled order triggers its creation.
I am assuming that you want the data to be dynamic, not the creation of dimensions, i.e. you know you require Dimensions A, B, C and that they relate to Facts Y and Z. If you truly want to dynamically create whole dimensions (dimension name, measure group relationships, etc.), I do not think this is possible.
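Question 2, the range lookup, is essentially a banding operation: map each amount to the band whose MinRange/MaxRange contains it, typically via a BETWEEN-style join in the view layer described above. Purely to illustrate the lookup logic, the same banding in pandas rather than SSAS, with made-up amounts:
import pandas as pd

# The range dimension from the question.
dim = pd.DataFrame({
    "ID": [1, 2, 3, 4],
    "MinRange": [1, 3, 6, 10],
    "MaxRange": [2, 5, 9, 14],
    "Description": ["1-2", "3-5", "6-9", "10-14"],
})

# Hypothetical average-investment amounts to band.
amounts = pd.Series([1, 4, 7, 12], name="AvgInvestment")

# Bin edges built from the dimension: (0, 2], (2, 5], (5, 9], (9, 14].
edges = [dim["MinRange"].iloc[0] - 1] + list(dim["MaxRange"])
banded = pd.cut(amounts, bins=edges, labels=dim["Description"])
print(pd.concat([amounts, banded.rename("Range")], axis=1))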