How to set "Supports Statistics" to true - arcgis

Just like the title says, I want to set the "Supports Statistics" option to "true" in the ArcGIS REST Services Directory, under All Layers and Tables.

From the ArcGIS Resource Center (http://resources.arcgis.com/en/help/arcgis-rest-api/index.html#//02r3000000zr000000):
supportsStatistics would return false in the following scenarios:
The layer / table resides in a workspace other than an ArcSDE or File Geodatabase.
The layer / table is a Query Layer - a layer / table that is defined by a SQL query, e.g. a layer from an enterprise database without ArcSDE (not a Geodatabase), a Geodatabase archived layer, etc.
The layer / table has more than one join defined on it.
The layer / table is joined with another layer / table from a different workspace.
The layer / table has an "outer" join and where the workspace is a pre-10.1 Geodatabase and application server connection is used.
You didn't provide many details about your service, but I'd guess that your layer fits one of those five scenarios.
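If you want to check what a layer currently reports, the layer resource in the REST Services Directory exposes the flag in its JSON. For example (the server and layer path here are placeholders for your own service):

http://<server>/arcgis/rest/services/<folder>/<service>/MapServer/<layerId>?f=pjson

Depending on the server version, "supportsStatistics": true/false appears at the top level of the layer JSON and/or inside "advancedQueryCapabilities". Since the flag is derived from the data source, the way to turn it on is to remove the offending condition, e.g. move the data into a File Geodatabase or ArcSDE geodatabase, or reduce the joins so that at most one join within the same workspace remains.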

What is the use of Table Delivery Class in SAP Data Dictionary?

I want to see the difference between delivery classes 'A' and 'C'. 'C' is described as being for data entered only by the customer, but how can I see that in the code?
I created two tables, one with delivery class 'A' and one with 'C', and added data to both with ABAP code. I thought I wouldn't be able to insert data into the table I created with 'C', but they both work the same.
For A Type:
DATA wa_ogr LIKE ZSGT_DELIVCLS2.
wa_ogr-ogrenci_no = 1.
wa_ogr-ogrenci_adi = 'Seher'.
INSERT ZSGT_DELIVCLS2 FROM wa_ogr.
For C Type:
DATA wa_ogr2 LIKE ZSGT_DELIVERYCLS.
wa_ogr2-ogrenci_no = 1.
wa_ogr2-ogrenci_adi = 'Seher'.
INSERT ZSGT_DELIVERYCLS FROM wa_ogr2.
The data is inserted without any problems when I check in the debugger.
Is there a live demo where I can see how delivery class 'C' behaves? Can you describe delivery class 'C' in more detail?
Tables with delivery class C are not "customer" tables, they are "customizing" tables. "Customizing" is SAP-speak for configuration settings. They are supposed to contain system-wide or client-wide settings which are set in the development system and then transported into the production system using a customizing transport. But whether that is actually the case depends on the settings you choose when generating a maintenance dialog with transaction SE54. It is possible to have customizing tables which are meant to be maintained directly in the production system without a transport request.
Tables with delivery class A are supposed to contain application data: data which is created and updated by applications as part of their everyday routine business processes. There is usually no reason to transport that data (although you can do so by manually adding the table name and keys to a transport request). Those applications can be SAP standard applications, customer-developed applications or both.
There is also the delivery class L, which is supposed to be used for short-lived temporary data, as well as the classes G, E, S and W, which should only be used by SAP on tables they created.
But from the perspective of an ABAP program, there is no difference between these settings. Any ABAP keywords which read or write database tables work the same way regardless of delivery class.
But there are some SAP standard tools which treat these tables differently. An important one is the client copy:
Data in delivery class C tables will always be copied.
Data in delivery class A tables is only copied when desired (it's a setting in the copy profile). You switch it off to create an empty client with all the settings of an existing client or to synchronize customizing settings between two existing clients without overwriting any of the application data. You switch it on if you want to create a copy of your application data, for example if you want a backup or want to perform a destructive test on real data.
Data in delivery class L tables doesn't get copied.
For more information on delivery classes, check the documentation.

Why does a well-running view get zero values in the native bounding box in GeoServer?

I am working on a remote database and I have all privileges as a user. I have created a spatial relational database that consists of 5 tables, one of which has the geometry column. When I publish only the table with the geometry, which has SRID GGRS87 (EPSG:2100), the native bbox is computed correctly. But when I create a view, either in PostGIS or via GeoServer, the native bbox gets the values (-1,-1,0,0) and the Lat/Lon bbox is not correct either. The view itself runs correctly in the database, merging all 5 tables. Lastly, I notice that when I create the view via GeoServer, the SRID column doesn't show up so that I could set it from there.
What could possibly be going wrong in the connection between PostGIS and GeoServer, or is it something else?
Thanks!
CREATE VIEW buildings AS
SELECT
    id_owner, id_building, address_name, address_num,
    region, x, y, closing_file
FROM owner
JOIN owner_property
    ON owner.id_owner = owner_property.owner_id
JOIN property
    ON owner_property.property_id = property.id_property
JOIN building
    ON property.building_id = building.id_building;
Your view seems to have no geometry column, and consequently no SRS. You most likely forgot to include it in your view or, as your screenshot suggests, the coordinate pairs are split into two columns, x and y. So build a point from x and y with ST_MakePoint and assign the SRID with ST_SetSRID in the query used to create the view ..
CREATE VIEW buildings_reinspection_file AS
SELECT
    id_owner, id_building, address_name, address_num,
    region, inspection_num, reinspection_num, reinspection_date,
    approval_num, ownership_perc, building_assessm, color_tagged,
    construction_type, ST_SetSRID(ST_MakePoint(x, y), 2100) AS geom, closing_file
FROM owner
JOIN owner_property
    ON owner.id_owner = owner_property.owner_id
JOIN property
    ON owner_property.property_id = property.id_property
JOIN building
    ON property.building_id = building.id_building
JOIN financial_assist
    ON property.financial_assist_id = financial_assist.id_financial_assist;
.. and try to publish it again in GeoServer. Note that ST_MakePoint's third argument would be a Z coordinate, not an SRID, which is why ST_SetSRID is needed. If the column you created in the table building containing the geometry is called point, just replace the ST_SetSRID(ST_MakePoint(x, y), 2100) expression with building.point.
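If GeoServer still computes an empty bounding box after that, it can help to confirm directly in PostGIS that the view exposes a geometry with the expected SRID. A minimal check, assuming the geometry column is aliased geom as in the view above:

-- should return 2100 if the SRID was assigned correctly
SELECT ST_SRID(geom) FROM buildings_reinspection_file LIMIT 1;

-- the extent GeoServer derives when you use "Compute from data"
SELECT ST_Extent(geom) FROM buildings_reinspection_file;

If ST_SRID returns 0, GeoServer has no way to derive the native SRS or a meaningful bounding box.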

How to send data to only one Azure SQL DB Table from Azure Stream Analytics?

Background
I have set up an IoT project using an Azure Event Hub and Azure Stream Analytics (ASA) based on tutorials from here and here. JSON-formatted messages are sent from a wifi-enabled device to the event hub using webhooks, then fed through an ASA query and stored in one of three Azure SQL database tables based on the input stream they came from.
The device (Particle Photon) transmits 3 different messages with different payloads, for which there are 3 SQL tables defined for long term storage/analysis. The next step includes real-time alerts, and visualization through Power BI.
Here is a visual representation of the idea:
The ASA Query
SELECT
    ParticleId,
    TimePublished,
    PH,
    -- and other fields
INTO TpEnvStateOutputToSQL
FROM TpEnvStateInput

SELECT
    ParticleId,
    TimePublished,
    EventCode,
    -- and other fields
INTO TpEventsOutputToSQL
FROM TpEventsInput

SELECT
    ParticleId,
    TimePublished,
    FreshWater,
    -- and other fields
INTO TpConsLevelOutputToSQL
FROM TpConsLevelInput
Problem: For every message received, the data is pushed to all three tables in the database, not only to the output specified in the query. The table the data belongs in gets a new row as expected, while the other two tables get rows populated with NULLs in the columns for which no data existed.
From the ASA documentation, it was my understanding that the INTO keyword would direct the output to the specified sink. But that does not seem to be the case, as the output from all three inputs gets pushed to all sinks (all 3 SQL tables).
The test script I wrote for the Particle Photon will send one of each type of message with hardcoded fields, in the order: EnvState, Event, ConsLevels, each 15 seconds apart, repeating.
Here is an example of the output being sent to all tables, showing one column from each table:
Which was generated using this query (in Visual Studio):
SELECT
t1.TimePublished as t1_t2_t3_TimePublished,
t1.ParticleId as t1_t2_t3_ParticleID,
t1.PH as t1_PH,
t2.EventCode as t2_EventCode,
t3.FreshWater as t3_FreshWater
FROM dbo.EnvironmentState as t1, dbo.Event as t2, dbo.ConsumableLevel as t3
WHERE t1.TimePublished = t2.TimePublished AND t2.TimePublished = t3.TimePublished
For an input event of type TpEnvStateInput, where the key 'PH' exists (and not the keys 'EventCode' or 'FreshWater', which belong to TpEventsInput and TpConsLevelInput, respectively), an entry into only the EnvironmentState table is desired.
Question:
Is there a bug somewhere in the ASA query, or a misunderstanding on my part on how ASA should be used/setup?
I was hoping I would not have to define three separate Stream Analytics containers, as they tend to be rather pricey. After running through this tutorial, and leaving 4 ASA containers running for one day, I used up nearly $5 in Azure credits. At a projected $150/mo cost, there's just no way I could justify sticking with Azure.
ASA is intended for Complex Event Processing. In your queries you are essentially using ASA to pass data from the event hub to tables. It will be much cheaper to host a simple "worker web app" that processes the incoming events instead.
This blog post covers the best practices:
http://blogs.msdn.com/b/servicebus/archive/2015/01/16/event-processor-host-best-practices-part-1.aspx
ASA is great if you are doing transformations, filters or light analytics on your input data in real time. It also works well if you have Azure Machine Learning models that are exposed as functions (currently in preview).
In your example, all three "select into" statements are reading from the same input source, and don't have any filter clauses, so all rows would be selected.
If you only want to select specific rows for each of the outputs, you have to specify a filter condition. For example, assuming you only want records with a non-null value in column "PH" for the output "TpEnvStateOutputToSQL", the ASA query would look like the one below:
SELECT
    ParticleId,
    TimePublished,
    PH
    -- and other fields
INTO TpEnvStateOutputToSQL
FROM TpEnvStateInput
WHERE PH IS NOT NULL
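Assuming EventCode and FreshWater are likewise present only in their respective message types, as the question indicates, the remaining two outputs can be filtered the same way. A sketch completing the pattern:

SELECT
    ParticleId,
    TimePublished,
    EventCode
    -- and other fields
INTO TpEventsOutputToSQL
FROM TpEventsInput
WHERE EventCode IS NOT NULL

SELECT
    ParticleId,
    TimePublished,
    FreshWater
    -- and other fields
INTO TpConsLevelOutputToSQL
FROM TpConsLevelInput
WHERE FreshWater IS NOT NULL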

SAP Flight reservation application

I am accessing the flight reservation application built in SAP.
The application has a section on catering which contains: BC_MEAL, BC_MEALT, BC_STARTER, BC_MAINCOURSE, BC_DESSERT.
However, there are no such tables prefixed with BC_.
The tables are SMEAL, SMEALT, SSTARTER, SMACOURSE, SDESSERT instead.
What is this discrepancy due to? How does SAP manage to convert application names into table names?
You're looking at the Data Modeler (SD11) and trying to compare it to the Data Dictionary / ABAP Dictionary (SE11). The actual table names are assigned to the entities explicitly:
expand BC_FLIGHT
double-click on BC_SFLIGHT
Button Dict. (?)
--> This screen should show the tables and/or views used to represent the entity.
It is worth noting that for many applications, no explicit data model exists (which is why I personally never bothered with the Data Modeler - a tool like this is virtually useless unless everyone else uses it as well).

Accommodating Dynamic Hierarchies in a Data Warehouse Model

I am building a data warehouse for the core ERP application of the company I work for, for a particular client.
Most of the data in the source database that relates to hierarchies in the data warehouse is stored in columns, as shown below:
But traditionally, to my knowledge, the model to store dimension data is as follows:
I could pivot the data and fit it into the model shown above. But the issue arises when a user introduces a new hierarchy value. Say, for instance, the user in the future decides to define a new level called Product Sub Category. Then my entire data warehouse model would collapse, with no way to accommodate the newly defined hierarchy level.
Do let me know a way to overcome this situation.
I hope my question is clear enough. Just let me know if further details are needed.
Well, nothing should collapse -- the ETL should extract and load the data as always.
Here are a few options to consider:
Simply add one more column for the new hierarchy level to dimProduct.
Try using a hierarchy helper (bridge) table (see the sketch after this list).
Consider adding a path string attribute to dimProduct.
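To make the second option concrete, here is a minimal sketch of a hierarchy helper (bridge) table, assuming dimProduct and factSales are both keyed by ProductKey; all table and column names are illustrative, not taken from the post. Because each ancestor/descendant pair is stored as a row, a new level such as Product Sub Category only adds rows, never columns:

CREATE TABLE dimProductHierarchy (
    AncestorKey   int NOT NULL,  -- ProductKey of the higher-level member
    DescendantKey int NOT NULL,  -- ProductKey of the lower-level member
    LevelsApart   int NOT NULL,  -- 0 = self, 1 = direct child, and so on
    PRIMARY KEY (AncestorKey, DescendantKey)
);

-- Rolling facts up to any member then needs no knowledge of how many levels exist:
SELECT h.AncestorKey, SUM(f.SalesAmount) AS TotalSales
FROM factSales f
JOIN dimProductHierarchy h ON h.DescendantKey = f.ProductKey
GROUP BY h.AncestorKey;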