Specifying a GeoTools read-only DataSource

Using a call such as:
DataStore dataStore = DataStoreFinder.getDataStore(map);
Is there an entry I can add to the map to make the DataStore read-only? The only parameter I have seen is the URL that specifies the source for the datastore.
I imagine a map is used to pass the arguments because different data sources require different parameters. I am dealing with shapefiles right now and have not seen any way to specify this.
Thanks.

A DataStore doesn't have a notion of being read-only or read-write. The classes that access a feature type, however, do: there is a difference between a FeatureSource and a FeatureStore. The former does not have any write/update methods; a high-level description is available in the GeoTools documentation.
By default, datastore.getFeatureSource returns its result typed as a FeatureSource (read-only). If you want write access, you have to try to cast the FeatureSource to a FeatureStore. Note that not all DataStore implementations provide write access.
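The cast-and-check pattern might be sketched as below. This is an illustration, not a definitive recipe: the shapefile path is hypothetical, error handling is elided, and GeoTools package names have moved between releases (org.geotools.data vs. org.geotools.api.data in recent versions), so adjust the imports to your version.

```java
import java.io.File;
import java.io.Serializable;
import java.util.HashMap;
import java.util.Map;

import org.geotools.data.DataStore;
import org.geotools.data.DataStoreFinder;
import org.geotools.data.simple.SimpleFeatureSource;
import org.geotools.data.simple.SimpleFeatureStore;

public class WriteAccessCheck {
    public static void main(String[] args) throws Exception {
        Map<String, Serializable> params = new HashMap<>();
        // "roads.shp" is a hypothetical shapefile
        params.put("url", new File("roads.shp").toURI().toURL());

        DataStore dataStore = DataStoreFinder.getDataStore(params);
        SimpleFeatureSource source =
                dataStore.getFeatureSource(dataStore.getTypeNames()[0]);

        // The only way to find out whether you got write access is the cast:
        if (source instanceof SimpleFeatureStore) {
            SimpleFeatureStore store = (SimpleFeatureStore) source;
            // write methods such as store.addFeatures(...) are now available
        } else {
            // read-only: a FeatureSource exposes no write/update methods
        }
    }
}
```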

Related

Google App Engine: Is there any security concern with giving away datastore urlsafe entity keys in an API?

I want to give out anonymous IDs to certain entities in a public token-based API.
Is there any reason I shouldn't be using the urlsafe string of entity keys for that, since they are already anonymized (at least in my case, where I'm not using my own data to construct the key)?
Google App Engine and the Datastore are considered safe as long as I'm not handing anyone the key, which I'm not, right?
Thank you.
The documentation says: "The urlsafe keyword parameter uses a websafe-base64-encoded serialized reference, but it's best to think of it as just an opaque unique string." I think this is what you're referring to when you say it is anonymized.
But the documentation then warns: "The string representation of a key looks cryptic, but is not encrypted! It can be converted back to the raw key data, both kind and identifier. If you don't want to expose this data to your users (and allow them to easily guess other entities' keys), then encrypt these strings or use something else."
You can decode the key yourself via base64 - usually there is no risk in giving it away.
The huge risk is in taking urlsafe entity keys as parameters and using them to read from the datastore. An attacker can trick your application into reading arbitrary data from your datastore project. To my knowledge this is documented nowhere.
So basically, any variant of this is a no-go in a web server:
def get(params):
    # blindly decoding a client-supplied key and fetching whatever it points at
    data = datastore.get(urlsafe_decode(params.key))
    return data
Any key supplied from the outside should never be used with the datastore as-is, since you cannot be sure you are reading the kind/path you are expecting. This is basically the same scope of risk as SQL injection.
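A safer pattern is to decode the key and refuse anything whose kind is not on an allowlist. The sketch below is illustrative only: the real urlsafe format is a websafe-base64 serialized reference (a protobuf), so parse_key here uses a stand-in "kind|id" encoding; the point is the allowlist check, not the wire format.

```python
import base64

# Kinds that clients are actually allowed to fetch (illustrative name).
ALLOWED_KINDS = {"PublicProfile"}

def parse_key(urlsafe):
    # Stand-in for real decoding: GAE keys are websafe-base64 of a
    # serialized reference containing kind and identifier.
    raw = base64.urlsafe_b64decode(urlsafe.encode())
    kind, _, ident = raw.decode().partition("|")
    return kind, ident

def safe_get(urlsafe):
    kind, ident = parse_key(urlsafe)
    if kind not in ALLOWED_KINDS:
        raise ValueError("kind %r is not allowed" % kind)
    return kind, ident  # in real code: fetch the entity here

# A key pointing at a private kind is rejected; an allowed one passes.
evil = base64.urlsafe_b64encode(b"UserCredentials|42").decode()
ok = base64.urlsafe_b64encode(b"PublicProfile|7").decode()
```

The same idea applies with the real ndb API: after decoding, inspect the key's kind (and parent path) before touching the datastore.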

API Model Definitions

Typically when I am making API calls I am using JavaScript (AJAX). JSON doesn't distinguish value types like int or date, so everything is effectively passed as a string.
Since I manage my own API, I create request-able models that will tell you the definition of a model.
For example, Id's value type is int and StartDate's value type is date.
I use the property types to automate form creation.
Is there a standard as to how to do this? My way works, but I'd prefer to be doing this by the book if it already exists.
OpenAPI is a standard you could follow. If you also make use of Swagger, it will allow you to produce a JSON schema which can be used in generating forms.
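As a minimal illustration, an OpenAPI 3 schema fragment can carry exactly the type information described above (the Booking name is made up here, echoing the Id/StartDate example):

```yaml
components:
  schemas:
    Booking:
      type: object
      properties:
        Id:
          type: integer
        StartDate:
          type: string
          format: date
```

A form generator can then map integer to a numeric input and string/date to a date picker.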
The hard part is that typings are checked at compile time, and JavaScript in the browser has no compile step.
You could use a typed schema layer such as GraphQL, which adds definitions for those types ahead of time. Those definitions can then be fetched dynamically and enforced using TypeScript and a tool like Apollo.
If you don't want to use TypeScript or GraphQL, you could use something like a Mongoose schema, expose the schema on an endpoint, and have your front end rebuild the schema dynamically to check types by casting when creating new objects.
Personally, I've done this the old-fashioned way, writing my own form schema and enforcing the field types strictly on the front end by interpreting the fieldTypes:
// returned from the API somewhere
const fields = [{
    type: 'input',
    name: 'firstName',
    rank: 0,
    validation: /^[a-zA-Z\s]+$/
}];
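Given a field list of that shape, the front end can compile it into per-field validators. A minimal sketch (field names and helper names are made up for illustration):

```javascript
// Hypothetical field list, as it might come back from the API.
const fields = [
  { type: 'input', name: 'firstName', rank: 0, validation: /^[a-zA-Z\s]+$/ },
  { type: 'input', name: 'age', rank: 1, validation: /^\d+$/ }
];

// Compile the schema into a map of field name -> validator function.
// Accepts either a RegExp or a pattern string, since JSON transports
// regexes as plain strings.
function buildValidators(fieldList) {
  const validators = {};
  for (const field of fieldList) {
    const re = field.validation instanceof RegExp
      ? field.validation
      : new RegExp(field.validation);
    validators[field.name] = (value) => re.test(value);
  }
  return validators;
}

const validators = buildValidators(fields);
console.log(validators.firstName('Ada Lovelace')); // true
console.log(validators.age('Ada'));                // false
```

Transporting the pattern as a plain string (without the /.../ delimiters) avoids having to strip them before calling new RegExp.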
Edit:
Found this great library that exports typed interfaces based on GraphQL models:
https://github.com/avantcredit/gql2ts

FactoryImpl to set atts via props for bound inputs

First, thanks for any advice. I am new to all of this and apologize for any obvious blunders.
Second, the question:
In an interface for entering clients that often possess a number of roles, it seemed efficient to create a set of inputs whose visual characteristics and associated data binding are based simply on the input's name.
For example, inquirerfirstname would be any caller or emailer who contacted our company.
The name would dictate a label, placeholder, and the location in firebase where the data would be stored.
The single name could be used--I thought--with a relational table (state machine or series of nested ifs) to define the properties of the input and change its outward appearance and inner bindings through property manipulation.
I created a set of nested ifs and console-logged the property changes in the inputs, but their representation in the host element (a collection of inputs that generated messages to clients as well as messages to sales staff) remained unaffected.
I attempted using the ready callback. I forced the state change with a button.
I was unable to use var name = new MyInput(name). I believe this method would be most effective, but I am unsure how to "stamp" the JavaScript into a heavyweight stamped parent element.
An example of a more complicated and dynamic use of a constructor and a factory implementation that can read database (JSON) objects and respond by generating HTML elements would be awesome.
In vanilla JavaScript a forEach would seem to do the trick, but the definitions, structure, and binding would not be organic; in other words, it might be easier just to hand-stamp the inputs in Polymer HTML.
I would be really grateful for any help. I have looked for a week and failed to find one example that took data binding, physical appearance, attribute swapping, property binding, and object reading into account.
I guess it's a lot, but each piece independently (save the use of the constructor) I think I get.
Thanks again.
Jason
PS: I am aware that the stamping of the element seems to preclude dynamic property, attribute, and binding assignments. I was hoping a computed attribute mixed with a factoryImpl would be an option (with a nice example).

Resources accessible via the DataFactory Interface

Looking into org.pentaho.reporting.engine.classic.core.DataFactory, and more specifically into the initialize method (which was formerly part of the ContextAwareDataFactory), I was wondering which resources / which parts of the context are accessible via the interface, e.g. via the ResourceManager.
For instance, is it possible to get access to "resources" defined in a report, e.g. data sources or formulas (aside from the report parameters, which are accessible via the query method)? Thanks in advance!
The resource-manager allows you to access raw data stored in the zip/prpt file - but we do not allow you to access the parsed report or any of its (parsed) components.
With the resource-manager you can for instance load embedded xml- or other files and parse them as part of the query process.
If you were to do something extra nasty that requires access to the report definition and its content, then you could gain access via a wild hack using subreports:
1. Create a new report function (via code). In that function, override the reportInitialized method to get the report instance (event.getState().getReportDefinition()). Store that object in the function and return it via the function's getValue() method.
2. Pass that function's result as a parameter to a subreport.
3. The subreport's data factories can now access the parameter, which is the report object returned by the master report's function.
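The report function described above might look roughly like this. This is a sketch against the classic engine API; the class name is made up, and exact signatures may differ between PRD versions:

```java
import org.pentaho.reporting.engine.classic.core.ReportDefinition;
import org.pentaho.reporting.engine.classic.core.event.ReportEvent;
import org.pentaho.reporting.engine.classic.core.function.AbstractFunction;

// Captures the report definition when the report starts, and exposes it
// as the function's value so it can be handed to a subreport as a parameter.
public class ReportGrabberFunction extends AbstractFunction {
    private ReportDefinition report;

    @Override
    public void reportInitialized(final ReportEvent event) {
        report = event.getState().getReportDefinition();
    }

    @Override
    public Object getValue() {
        return report;
    }
}
```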
This process is intentionally complex and not fun. We are strongly against using the report in the process of querying data.
P.S.: If you intend to access a SQL/MQL/MDX datasource from a scriptable datasource, then simply use the script extensions that are built into these datasources since PRD-3.9.
http://www.sherito.org/2011/11/pentaho-reportings-metadata-datasources.html

Fetched Properties, cross store relationships

I've got one store that is synchronized externally and another that is unique to the application instance. To cleanly differentiate the two, I want some join entities between them and then resolve through to the related entities using fetched properties, as "discussed" in the Core Data Programming Guide:
developer.apple.com/documentation/Cocoa/Conceptual/CoreData/Articles/cdRelationships.html#//apple_ref/doc/uid/TP40001857-SW5
I think I just don't really "get" how Fetched Properties are supposed to be used - and I've spent a fair number of hours looking for examples with no real luck.
The way I think of it is,
I have the following Entities each in a different store
Foo with attribute relatedBarName in store A
Bar with attribute barName in store B
I need to create a fetched property on Foo named findRelatedBar that relates Foo to Bar loosely through barName = relatedBarName.
However, since Foo and Bar are in different stores, I don't understand how to declare a relationship of any sort from Foo to Bar, whether through the fetched property or not.
The predicate builder in Xcode seems to want a destination entity. If they are in different schemas, how can you declare the destination? And if you don't declare a destination, how do you indicate at runtime that findRelatedBar on Foo describes Bar?
Otherwise, do they need to be in the same schema but just stored in different stores?
In crafting this question, I thought of these questions and answered them myself by more focused examination of the documentation. I assume if I found it confusing, others might as well, so I'll inline them with this post to make it easier to find related answers to fetched properties / core data stores.
Q) If a store coordinator has more than one store associated with it of the same schema, how do insertions know which store to insert to?
A) You use the assignObject:toPersistentStore: method on the managed object context.
Q) What does FETCH_SOURCE refer to in specific?
A) It's simply the managed object that has the fetched property associated with it; sort of like "self".
Q) What does FETCHED_PROPERTY refer to in specific?
A) It is a reference to the fetched property description instance you are using to query with - you can use this to insert per query variable substitution. By setting a property (as in the Core Data Programming example) on the userInfo of the property description instance you're using, you can inject that value into the expression.
Thanks!!!!
The answer is:
Yes, you need to do a cross-store fetched property with shared schemas. If you do this, you need to make sure you attribute the inserts with the assignObject:toPersistentStore: method as described in the question. However, due to the limitations of the SQLite persistent store, natural things like IN $FETCH_SOURCE.attribute do not work.
Q) If a store coordinator has more than one store associated with it of the same schema, how do insertions know which store to insert to?
This is what configurations are for. You create a configuration for each store and then assign entities to that configuration. You then create the store with the proper configuration. When you save the context, each entity will automatically go to the correct store.
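As a sketch in Swift (the configuration names and store URLs are hypothetical, and the configurations themselves must already be defined on the managed object model, with each entity assigned to one):

```swift
import CoreData

// Adds two SQLite stores to one coordinator, each backed by a different
// model configuration; saves then route entities to the matching store.
func addStores(to coordinator: NSPersistentStoreCoordinator,
               syncedURL: URL, localURL: URL) throws {
    try coordinator.addPersistentStore(ofType: NSSQLiteStoreType,
                                       configurationName: "SyncedConfig",
                                       at: syncedURL,
                                       options: nil)
    try coordinator.addPersistentStore(ofType: NSSQLiteStoreType,
                                       configurationName: "LocalConfig",
                                       at: localURL,
                                       options: nil)
}
```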