SAP Cloud Foundry Trial and hanatrial Service HDI-Shared Plan - hana

I'm using a trial account in SAP Cloud Platform and created an instance of the "hanatrial" service, which is up, and I can even query the public "TABLES" view.
The service provides me, in VCAP_SERVICES, with two users: user and hdi_user.
Both appear to have the same privileges in the provided schema: neither can insert data into any public table, nor create any tables inside the provided schema.
My question is: is it possible to create tables, or even a single table, with the "hanatrial" service? I have spent two days searching for an answer in the SAP documentation and on the Internet without finding any evidence that it is or that it is not.
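For reference, a check along these lines reproduces the behaviour (a minimal sketch, assuming the hdbcli Python driver and the host, port, user, password, and schema fields exposed in VCAP_SERVICES; the exact JSON keys may differ per service binding):

    # a minimal sketch, assuming the hdbcli Python driver and the credentials
    # exposed in VCAP_SERVICES; the exact JSON keys may differ per binding
    import json, os
    from hdbcli import dbapi

    creds = json.loads(os.environ["VCAP_SERVICES"])["hanatrial"][0]["credentials"]
    conn = dbapi.connect(address=creds["host"], port=int(creds["port"]),
                         user=creds["user"], password=creds["password"],
                         encrypt=True)
    cur = conn.cursor()
    cur.execute("SELECT * FROM TABLES LIMIT 5")                       # works
    cur.execute('CREATE TABLE "%s"."T1" (ID INT)' % creds["schema"])  # insufficient privilege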

For those of you who still wonder: apparently hanatrial has no support for creating tables.

How to query AAD Security Group Membership from Azure SQL

I'm trying to find a way, from within Azure SQL, to either 1) enumerate the members of an Azure Active Directory security group or 2) check whether a user login is a member of an SG. I've found various articles about doing this from a domain-joined standalone SQL installation, but not from Azure SQL. Most of the samples for the standalone installation use system sprocs like xp_cmdshell, which don't exist in Azure SQL. I know I can create an Azure Function or Logic App to sync users to a table, but I'd like to avoid using an external process to do this if possible.
@Kalyan Chanumolu-MSFT's comment should be very helpful to you. This scenario is not supported today.
You can try his suggestion: talk to the Microsoft Graph API from an intermediary, such as an Azure Function, to relay the data to Azure SQL Database.
You can also raise a support ticket to confirm it, and put forward your suggestion on the feedback forum.
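Such a relay might look roughly like this (a minimal sketch in Python rather than a full Azure Function, assuming an app registration with the Group.Read.All application permission; the tenant, client, and group IDs, connection string, and table name are all hypothetical, and Graph paging is ignored for brevity):

    # a sketch: read group members from Microsoft Graph and copy them into an
    # Azure SQL table; all IDs, secrets, and names below are hypothetical
    import msal, requests, pyodbc

    app = msal.ConfidentialClientApplication(
        "your-client-id",
        authority="https://login.microsoftonline.com/your-tenant-id",
        client_credential="your-client-secret")
    token = app.acquire_token_for_client(scopes=["https://graph.microsoft.com/.default"])

    members = requests.get(
        "https://graph.microsoft.com/v1.0/groups/your-group-id/members",
        headers={"Authorization": "Bearer " + token["access_token"]}).json()["value"]

    with pyodbc.connect("your-azure-sql-odbc-connection-string") as conn:
        cur = conn.cursor()
        for m in members:
            cur.execute("INSERT INTO dbo.GroupMembers (Id, DisplayName) VALUES (?, ?)",
                        m["id"], m.get("displayName"))
        conn.commit()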

Is MapR-DB a document store or a key-value store?

I know that MapR-DB exposes the OJAI interface to query using JSON documents, and HBase APIs with which we can create column families.
I want to know how MapR-DB stores data on the file system, and what kinds of queries are costly and what kinds are cheap (i.e., what MapR-DB is really made for).
It would be really helpful if someone could give some guidelines on what basis we should create column families if we are using the HBase APIs to access MapR-DB.
Please share any references that can answer this question.
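For concreteness, creating a table with column families through the HBase-compatible API might look like this (a sketch using the happybase Thrift client; the host, table path, and family names are hypothetical):

    # a sketch using the happybase Thrift client; the table path and
    # column-family names are hypothetical
    import happybase

    conn = happybase.Connection("maprdb-host")
    conn.create_table(
        "/user/tables/profiles",           # MapR-DB table names are filesystem paths
        {"static": dict(max_versions=1),   # rarely-changing attributes
         "events": dict(max_versions=5)})  # frequently-updated attributes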

How do I connect a BigQuery database based on a Google Sheet to Looker?

I'm attempting to connect BigQuery to Looker. I am pulling sample data from a Google Sheets document into a BigQuery dataset; this part is working fine, as my internal BigQuery queries are running just fine for this dataset. Using this documentation from the Looker forums, I tried to create a service account key to connect my BigQuery dataset to Looker. Unfortunately, the documentation is slightly out of date: Google now asks which service account (the Compute Engine default service account, the App Engine default service account, or a new service account that can have any of multiple roles) you want to attach the key to.
Thus far, I have tried using P12 keys created for the compute engine default service account, the app engine default service account, as well as a new Project Owner service account. When I create the connection in Looker, the admin page confirms that the connection "can connect, can cancel queries, can run simple select query" (I need it to do more complex things, but am just trying to connect at all right now). Using the SQL Runner to test a simple select 10 query out, I was able to query the public datasets, e.g. hacker_news or usa_names. However, whenever I tried to run the same query on my personal sample dataset, I received this error:
Failed to retrieve data - The job encountered an internal error during execution and was unable to complete successfully.
The permissions for the base Google Sheet that the BigQuery project is pulling from are set to be viewable by my coworkers who have the link. I have also been adding each service account I test as an editor (which I assume has the highest permissions). At this point, I am creating new service accounts with each of the different possible roles to see if it's a permissions issue from the role perspective. Nothing has worked so far, so any insight would be helpful!
UPDATE: I have created a new table within the same BigQuery dataset. The new table was created using a CSV file, which was simply a download of my previous table in Google Sheets. I updated the connection to Looker. When I wrote a select 10 query pulling from the new table, it worked fine and ran very quickly. This seems to imply that the problem is something about the permissions between Google Sheets and Google BigQuery.
I've been wanting to do something like this myself for a bit, saw this question, and decided to dig in.
First thing I found was this "documentation" over in the looker discourse:
https://discourse.looker.com/t/live-spreadsheets-in-databases/2698/7
In there, it describes the steps necessary to get this working.
Two important things that you are probably missing, based on your description of events so far (since it sounds like you've already attached the sheet to your dataset and are able to query it from the BigQuery UI):
Make sure you share the Google Sheet with the service account you are using to connect Looker to BigQuery. This is the Username from the Connections tab of the Admin page in Looker.
Make sure you have enabled the Drive and Sheets APIs for your Google project. You can do that via the API Library. Just search for "Drive" (or "Sheets"), click on the name, and then click on the "Enable" button on the API detail page.
Once I did the above, I had to wait a few minutes before things started working. I'll go out on a limb and guess that this was because Looker needed to cycle its internal connection pool before the permissions would reset and take effect. So you may need to run a few failing queries, or wait out the connection pool, before this will go into effect.
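If you want to confirm outside of Looker that the service account itself can read the sheet-backed table, a quick check might look like this (a sketch using the google-cloud-bigquery Python client; the key file, project, dataset, and table names are hypothetical — note the extra Drive scope, which sheet-backed tables need):

    # a sketch: query the Sheets-backed table as the service account itself.
    # The key file, project, dataset, and table names are hypothetical.
    from google.oauth2 import service_account
    from google.cloud import bigquery

    creds = service_account.Credentials.from_service_account_file(
        "looker-sa-key.json",
        scopes=["https://www.googleapis.com/auth/bigquery",
                "https://www.googleapis.com/auth/drive"])  # Drive scope for sheet-backed tables

    client = bigquery.Client(project="my-project", credentials=creds)
    for row in client.query("SELECT * FROM `my-project.my_dataset.sheet_table` LIMIT 10").result():
        print(row)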
Hope that helps.

My Azure storage unique namespace

I'm trying to move some tables from SQL to Azure Table Storage.
I created an MVC Website with the default authentication. I successfully connected it to my Azure SQL database. Now I want to use the table storage for authentication too, instead of the SQL database.
The problem is, I cannot find my storage account's unique namespace. What, and where, is that namespace?
Thanks!
Looking at a table URL, for example 'http://myaccountname.table.core.windows.net/mytable', 'myaccountname' is the name of your account. Storage account names must be between 3 and 24 characters in length and may contain numbers and lowercase letters only. The storage account name must be unique on the Azure service. A list of the storage accounts you own, and more information about them, can be found in the Azure Portal.
More information on authentication for tables can be found here and here. Manipulating and authenticating access to your tables are features built into the storage client libraries which are available in a variety of languages. Since you mention MVC, you might want to check out the .Net storage library.
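For example, once you have the account name, a table connection embeds it in the endpoint like this (a sketch using the azure-data-tables Python package, though the .NET library is analogous; the account name and key are hypothetical):

    # a sketch: the account name is the unique namespace embedded in the endpoint.
    # The account name and key below are hypothetical.
    from azure.core.credentials import AzureNamedKeyCredential
    from azure.data.tables import TableServiceClient

    service = TableServiceClient(
        endpoint="https://myaccountname.table.core.windows.net",
        credential=AzureNamedKeyCredential("myaccountname", "my-account-key"))
    table = service.create_table_if_not_exists("users")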

How to get/set information in an SQL database via an internet connection

I'm currently creating an SQL database that will eventually be accessed by web applications on both mobile and desktop platforms.
Setting up the SQL database is easy.
What I'm currently having trouble with is how I'm going to allow clients to access the information in the database without having direct access to the database, for security purposes.
For example, let's say I have a list of employees, and I'm creating employee profiles for a mobile application. I would like the mobile application to be able to use a function such as get(employee_name), which will call the database and retrieve the information about the employee in the form of JSON.
How would I go about doing this?
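For illustration, the get(employee_name) call described above could be exposed as a small HTTP endpoint that returns JSON, so clients never touch the database directly (a minimal sketch using Flask and pyodbc; the connection string, table, and column names are hypothetical):

    # a minimal sketch: expose employee lookups over HTTP as JSON.
    # The connection string, table, and column names are hypothetical.
    from flask import Flask, jsonify, abort
    import pyodbc

    app = Flask(__name__)
    CONN_STR = "DRIVER={ODBC Driver 17 for SQL Server};SERVER=myserver;DATABASE=mydb;UID=apiuser;PWD=secret"

    @app.route("/employees/<name>")
    def get_employee(name):
        with pyodbc.connect(CONN_STR) as conn:
            row = conn.cursor().execute(
                "SELECT name, title, email FROM employees WHERE name = ?", name).fetchone()
        if row is None:
            abort(404)
        return jsonify(name=row.name, title=row.title, email=row.email)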
For a very nice solution on using JSON with SQL Server tables, please look at Phil Factor's article, Consuming JSON Strings in SQL Server, at:
https://www.simple-talk.com/sql/t-sql-programming/consuming-json-strings-in-sql-server/