We have a requirement where we need to give a particular user group access to a BigQuery dataset that contains views created by Java code. I found that the datasets.patch method can help me do it, but I am not able to find documentation of what needs to be passed in the HTTP request.
You can find the complete documentation on how to update BigQuery dataset access controls in the documentation page linked. Given that you are already creating the views in your dataset programmatically, I would advise using the BigQuery client library, which may be more convenient than calling the datasets.patch method directly. In any case, if you are still interested in calling the API directly, you should provide the relevant portions of a dataset resource in the body of the request.
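For illustration, a sketch of a PATCH request that grants read access to a group might look something like this (the project/dataset IDs and email addresses are placeholders; note that the access field is replaced as a whole when patched, so include any existing entries you want to keep):

PATCH https://www.googleapis.com/bigquery/v2/projects/PROJECT_ID/datasets/DATASET_ID

{
  "access": [
    {"role": "OWNER", "userByEmail": "owner@example.com"},
    {"role": "READER", "groupByEmail": "your_group@example.com"}
  ]
}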
The first link I shared provides a good example of updating dataset access using the Java client libraries, but in short, this is what you should do:
public List<Acl> updateDatasetAccess(DatasetInfo dataset) {
    // Make a copy of the ACLs in order to modify them (adding the required group)
    List<Acl> previousACLs = dataset.getAcl();
    ArrayList<Acl> ACLs = new ArrayList<>(previousACLs);
    // Acl.Group grants access to a Google group; use Acl.User for a single account
    ACLs.add(Acl.of(new Acl.Group("your_group@gmail.com"), Acl.Role.READER));
    DatasetInfo.Builder builder = dataset.toBuilder();
    builder.setAcl(ACLs);
    // update() applies the change and returns the updated dataset
    return bigquery.update(builder.build()).getAcl();
}
EDIT:
The dataset object can be defined as follows:
BigQuery bigquery = BigQueryOptions.getDefaultInstance().getService();
Dataset dataset = bigquery.getDataset(DatasetId.of("YOUR_DATASET_NAME"));
Take into account that if you do not specify credentials when constructing the client object bigquery, the client library will look for credentials in the GOOGLE_APPLICATION_CREDENTIALS environment variable.
I'm using a second datastore with my Ember app, so I can communicate with a separate external API. I have no control over this API.
With a DS.JSONSerializer I can add some missing properties like id:
normalizeResponse(store, primaryModelClass, payload, id, requestType) {
  if (requestType == 'query') {
    // assign an id to each record, since the external API provides none
    payload.forEach(function(el, index) {
      payload[index].id = index;
    });
  }
  return this._super(store, primaryModelClass, payload, id, requestType);
}
Now I can do different tricks for each requestType. But every response is parsed the same way, and sometimes a response from one request needs to be parsed differently.
So what I am trying to do is change the normalizeResponse functionality for each different request path (mapped to a fake model using pathForType in an adapter for this store). But the store argument is always the same (obviously), and the primaryModelClass argument is always "unknown mixin" - not sure if this can be of any help.
How can I find what model was requested? With this information I could do a switch() in normalizeResponse.
Is there a different way to achieve my goal that does not require me to make a separate adapter for every path/model?
There are over a dozen normalize functions available. Something should work for what I am trying to achieve.
I think this is a great example of a use case for not using ember data.
Assuming that you have models A, B, and C that are all working great with ember data, leave those alone.
I'd create a separate service and make raw requests to that different endpoint. So you'd replace this.store.query('thing', {args}) with a separate service that uses ember-ajax (or ember-fetch or whatever). If you need to, you can use that service to hold the data (Ember Data is just a service anyway), or you can create models and push them into the store manually.
Without knowing more about your exact situation, it's hard to give specific code or advice, but I'd just avoid this problem and write your own custom service.
You can use primaryModelClass.modelName.
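For example, a sketch of how that could be used in the serializer ('special-thing' is a placeholder model name):

normalizeResponse(store, primaryModelClass, payload, id, requestType) {
  // primaryModelClass.modelName names the model that was requested
  switch (primaryModelClass.modelName) {
    case 'special-thing':
      // custom parsing for this particular path/model
      break;
    default:
      break;
  }
  return this._super(store, primaryModelClass, payload, id, requestType);
}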
I'm currently using a self-hosted Parse Server that is up to date, but I'm facing some security issues.
At the moment, calls to the /classes route can retrieve any object in any table, and even though I might want an object to be publicly readable, I wouldn't like to show all the parameters of that object. Briefly: I don't want the database to be retrievable in any case; I would like to disable "everything" except the Parse Cloud Code, so that I would be able to call my own functions but not able to use the client SDKs (Android, iOS, C#, Javascript...) to retrieve data.
Is there any way to do this? I've been searching deeply for this and trying to debug some controllers, but I don't have any clue.
Thank you very much in advance.
tl;dr: set the ACL for all objects to be readable only with the master key, and then tell the query in Cloud Code to use the master key when querying your data.
So without changing Parse Server itself, you could make use of ACLs and only allow a specific user to access objects. You would then "log in" as that user in your Cloud Code and be able to access all objects.
As the old method, Parse.Cloud.useMasterKey(), isn't available in the open-source Parse Server, you will have to pass the useMasterKey parameter to the query you are running, which should do the trick for this particular request and will bypass ACLs/CLPs. There is an example in the Parse Server wiki as well.
For convenience, here is a short code example from the Wiki:
Parse.Cloud.define('getTotalMessageCount', function(request, response) {
  var query = new Parse.Query('Messages');
  query.count({
    useMasterKey: true
  }) // count() will use the master key to bypass ACLs
  .then(function(count) {
    response.success(count);
  });
});
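To stop clients from reading the data in the first place, you can also save objects with an empty ACL, so that only requests made with the master key can touch them. A minimal sketch (the Messages class and its field are placeholders):

// Cloud Code: save an object that no client can read or write
var message = new Parse.Object('Messages');
message.set('text', 'hello');
// an empty ACL grants no public or per-user access at all
message.setACL(new Parse.ACL());
message.save(null, { useMasterKey: true }).then(function() {
  // from now on, only queries run with useMasterKey: true can see it
});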
I have some scenarios written in JBehave and I would like to run them for 1000+ data rows. The problem is that I cannot list all data items in 'Examples' because, firstly, it is not maintainable and, secondly, I get this data file every day from an external service.
Is there a way to write a scenario that can take data from the file?
Parameters can be loaded from an external file.
Details with an example are here: http://jbehave.org/reference/stable/parametrised-scenarios.html
Loading parameters from an external resource
The parameters table can also be loaded from an external resource, be it a classpath resource or a URL.
Given a stock of <symbol> and a <threshold>
When the stock is traded at <price>
Then the alert status should be <status>
Examples:
org/jbehave/examples/trader/stories/trades.table
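The referenced .table file is just a plain-text examples table whose columns match the scenario parameters; its contents might look like this (values are made up for illustration):

|symbol|threshold|price|status|
|STK1|15.0|5.0|OFF|
|STK1|15.0|16.0|ON|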
We need to enable the parser to find the resource with the appropriate resource loader configured via the ExamplesTableFactory:
new MostUsefulConfiguration()
.useStoryParser(new RegexStoryParser(
new ExamplesTableFactory(new LoadFromClasspath(this.getClass())))
)
I had the same requirement, and I think the following is a possible solution.
Implement a method that reads the Excel sheet and prepares testData.table before the scenario starts executing, using JBehave's @BeforeScenario annotation in the steps Java file.
Refer to this link to implement loading data from an external resource: http://jbehave.org/reference/stable/parametrised-scenarios.html
@BeforeScenario
public void prepareTestData() {
    // @BeforeScenario methods take no arguments, so get the sheet path from
    // configuration; read the Excel file and write its rows to a *.table file
}
Currently the JSONStore API provides a load() method whose documentation says:
"This function always stores whatever it gets back from the adapter. If the data exists, it is duplicated in the collection."
This means that if you want to avoid duplicates by calling load() on an already populated collection, you need to empty or drop the collection first. But if you want to be able to keep the elements you already have in the collection in case there is no more connectivity and your application goes into offline mode, you also need to keep track of these existing elements.
Since the API doesn't provide an "overwrite" option that would replace the existing elements when the call to the adapter succeeds, I'm wondering what kind of logic should be put in place to manage both offline availability of data and the ability to refresh it at any time. It is not obvious how to handle all the failure cases by nesting the JS code with promises...
Thanks for your advice!
One approach to achieve this:
1. Use enhance to create your own load method (i.e. loadAndOverwrite); see the sketch after this list. You should have access to all the variables kept inside a JSONStore instance (collection name, adapter name, adapter load procedure name, etc. -- you will probably use those variables in the invokeProcedure step below).
2. Call push to make sure there are no local changes.
3. Call invokeProcedure to get data; all the variables you need should be provided in the context of enhance.
4. Find if the document already exists and then remove it. Use {push: false} so JSONStore won't track that change.
5. Use add to add the new/updated document. Use {push: false} so JSONStore won't track that change. Alternatively, if the document already exists you can use replace to update it.
Alternatively, you can use removeCollection and call load again to refresh the data.
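A rough sketch of steps 1-5, assuming jQuery-style promises, records that carry a unique id field, and placeholder collection/adapter/procedure names (people, peopleAdapter, getPeople):

WL.JSONStore.get('people').enhance('loadAndOverwrite', function () {
  var collection = this;

  // step 2: push pending local changes first
  return collection.push().then(function () {
    // step 3: fetch fresh data from the adapter
    return WL.Client.invokeProcedure({
      adapter: 'peopleAdapter',
      procedure: 'getPeople'
    });
  }).then(function (response) {
    var freshDocs = response.invocationResult.results || [];
    // steps 4-5: replace local copies if they exist, add them otherwise;
    // {push: false} stops JSONStore from tracking these writes as local
    // changes that would later be pushed back to the adapter
    var ops = freshDocs.map(function (json) {
      return collection.find({ id: json.id }, { exact: true })
        .then(function (found) {
          if (found.length > 0) {
            found[0].json = json;
            return collection.replace(found[0], { push: false });
          }
          return collection.add(json, { push: false });
        });
    });
    return $.when.apply($, ops);
  });
});

Calling WL.JSONStore.get('people').loadAndOverwrite() would then refresh the collection without creating duplicates.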
There's an example that shows how to use all those API calls here.
Regarding promises, read this from InfoCenter and this from HTML5Rocks. Google can provide more information.
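In short, the nesting can usually be avoided by chaining: each then returns the next step's promise, and a single failure handler covers the whole sequence (stepOne/stepTwo/stepThree are placeholders):

// flat chain instead of nested callbacks (jQuery-style deferreds)
stepOne()
  .then(stepTwo)
  .then(stepThree)
  .fail(function (err) {
    // any failure along the chain lands here
  });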
Looking into org.pentaho.reporting.engine.classic.core.DataFactory, and more specifically into the initialize method (which was formerly part of the ContextAwareDataFactory), I was wondering what resources/what part of the context is accessible via the interface, e.g. via the ResourceManager.
For instance, is it possible to get access to "resources" defined in a report, e.g. data sources or formulas (aside from the report parameters, which are accessible via the query method)? Thanks in advance!
The resource-manager allows you to access raw data stored in the zip/prpt file - but we do not allow you to access the parsed report or any of its (parsed) components.
With the resource-manager you can, for instance, load embedded XML or other files and parse them as part of the query process.
If you were to do something extra nasty that requires access to the report definition and its content, then you could gain access via a wild hack using subreports:
1. Create a new report function (via code). In that function, override the reportInitialized method to get the report instance (event.getState().getReportDefinition()). Store that object in the function and return it via the getValue() method of your function.
2. Pass that function's result as a parameter to a subreport.
3. The subreport's data-factories can now access the parameter, which is the report object returned by the master report's function.
This process is intentionally complex and not fun. We strongly advise against using the report in the process of querying data.
P.S.: If you intend to access a SQL/MQL/MDX datasource from a scriptable datasource, then simply use the script extensions that are built into these datasources since PRD-3.9.
http://www.sherito.org/2011/11/pentaho-reportings-metadata-datasources.html