Google Cloud Dataflow cross-project access for Bigtable

I want to run a Dataflow job to migrate data from google-project-1-table to google-project-2-table (read from one and write to the other). I am getting a permission issue while doing that. I have set GOOGLE_APPLICATION_CREDENTIALS to point to my credentials file for project-1. In project-2, the following accounts from project-1 have been granted roles: 1) service account (role: Editor) 2) -compute@developer.gserviceaccount.com (role: Editor) 3) @cloudservices.gserviceaccount.com (role: Editor).
Is there anything else I need to do to run the job?
Caused by: com.google.bigtable.repackaged.com.google.cloud.grpc.io.IOExceptionWithStatus: Error in response stream
at com.google.bigtable.repackaged.com.google.cloud.grpc.scanner.ResultQueueEntry$ExceptionResultQueueEntry.getResponseOrThrow(ResultQueueEntry.java:66)
at com.google.bigtable.repackaged.com.google.cloud.grpc.scanner.ResponseQueueReader.getNextMergedRow(ResponseQueueReader.java:55)
at com.google.bigtable.repackaged.com.google.cloud.grpc.scanner.StreamingBigtableResultScanner.next(StreamingBigtableResultScanner.java:42)
at com.google.bigtable.repackaged.com.google.cloud.grpc.scanner.StreamingBigtableResultScanner.next(StreamingBigtableResultScanner.java:27)
at com.google.bigtable.repackaged.com.google.cloud.grpc.scanner.ResumingStreamingResultScanner.next(ResumingStreamingResultScanner.java:89)
at com.google.bigtable.repackaged.com.google.cloud.grpc.scanner.ResumingStreamingResultScanner.next(ResumingStreamingResultScanner.java:45)
at com.google.cloud.bigtable.dataflow.CloudBigtableIO$1.next(CloudBigtableIO.java:221)
at com.google.cloud.bigtable.dataflow.CloudBigtableIO$1.next(CloudBigtableIO.java:216)
at com.google.cloud.bigtable.dataflow.CloudBigtableIO$Reader.advance(CloudBigtableIO.java:775)
at com.google.cloud.bigtable.dataflow.CloudBigtableIO$Reader.start(CloudBigtableIO.java:799)
at com.google.cloud.dataflow.sdk.io.Read$Bounded$1.evaluateReadHelper(Read.java:178)
... 18 more
Caused by: com.google.bigtable.repackaged.io.grpc.StatusRuntimeException: PERMISSION_DENIED: User can't access project: project-2
at com.google.bigtable.repackaged.io.grpc.Status.asRuntimeException(Status.java:431)
at com.google.bigtable.repackaged.com.google.cloud.grpc.scanner.StreamObserverAdapter.onClose(StreamObserverAdapter.java:48)
at com.google.bigtable.repackaged.io.grpc.internal.ClientCallImpl$ClientStreamListenerImpl$3.runInContext(ClientCallImpl.java:462)
at com.google.bigtable.repackaged.io.grpc.internal.ContextRunnable.run(ContextRunnable.java:54)
at com.google.bigtable.repackaged.io.grpc.internal.SerializingExecutor$TaskRunner.run(SerializingExecutor.java:154)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
... 1 more

There are some instructions for this in the section "Accessing Cloud Platform Resources Across Multiple Cloud Platform Projects" of the Dataflow Security and Permissions guide.
Since that guide does not explicitly address Cloud Bigtable, I will try to write up the requirements clearly here in terms of your question.
Using fake project id numbers, it seems you have:
A project project-1 with id 12345.
A project project-2 with id 9876.
A Bigtable table google-project-1-table in project-1.
A Bigtable table google-project-2-table in project-2.
A Dataflow pipeline that will run in project-1, which you want to:
read from google-project-1-table
write to google-project-2-table
Is that accurate?
Your Dataflow workers that write to Bigtable run as the Compute Engine service account, that is, 12345-compute@developer.gserviceaccount.com. This account will need to be able to access project-2 and write to google-project-2-table.
Your error message implies that the permissions failure occurs at the coarsest granularity: the account cannot access project-2 at all.
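For reference, here is a rough sketch of how the two projects typically show up in the pipeline code, assuming the bigtable-hbase-dataflow connector that appears in your stack trace (the zone, cluster and table names below are placeholders, and on newer connector versions the builder expects withInstanceId instead of withZoneId/withClusterId). Note that the workers still run as the single Compute Engine service account of the project that runs the job, which is why that one account needs access in both projects:

// Hypothetical cross-project copy sketch; names in angle brackets are placeholders.
import com.google.cloud.bigtable.dataflow.CloudBigtableIO;
import com.google.cloud.bigtable.dataflow.CloudBigtableScanConfiguration;
import com.google.cloud.bigtable.dataflow.CloudBigtableTableConfiguration;
import com.google.cloud.dataflow.sdk.Pipeline;
import com.google.cloud.dataflow.sdk.io.Read;
import com.google.cloud.dataflow.sdk.options.PipelineOptions;
import com.google.cloud.dataflow.sdk.options.PipelineOptionsFactory;
import com.google.cloud.dataflow.sdk.transforms.DoFn;
import com.google.cloud.dataflow.sdk.transforms.ParDo;
import org.apache.hadoop.hbase.Cell;
import org.apache.hadoop.hbase.client.Mutation;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Result;

public class CrossProjectCopy {
  public static void main(String[] args) {
    PipelineOptions options = PipelineOptionsFactory.fromArgs(args).create();
    Pipeline p = Pipeline.create(options);

    // Source table lives in project-1.
    CloudBigtableScanConfiguration readConfig = new CloudBigtableScanConfiguration.Builder()
        .withProjectId("project-1")
        .withZoneId("<zone>")            // or withInstanceId(...) on newer connector versions
        .withClusterId("<cluster-1>")
        .withTableId("google-project-1-table")
        .build();

    // Destination table lives in project-2; the worker service account
    // (12345-compute@developer.gserviceaccount.com) must have Bigtable access there.
    CloudBigtableTableConfiguration writeConfig = new CloudBigtableTableConfiguration.Builder()
        .withProjectId("project-2")
        .withZoneId("<zone>")
        .withClusterId("<cluster-2>")
        .withTableId("google-project-2-table")
        .build();

    CloudBigtableIO.initializeForWrite(p);
    p.apply(Read.from(CloudBigtableIO.read(readConfig)))
     .apply(ParDo.of(new DoFn<Result, Mutation>() {
       @Override
       public void processElement(ProcessContext c) throws Exception {
         Result row = c.element();
         Put put = new Put(row.getRow());
         for (Cell cell : row.rawCells()) {
           put.add(cell);                // copy every cell to the destination row
         }
         c.output(put);
       }
     }))
     .apply(CloudBigtableIO.writeToTable(writeConfig));

    p.run();
  }
}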

Related

Am I able to view a list of devices by partition in IoT Hub?

I have 2 nodes of a cluster receiving messages from IoT Hub. I split their responsibility by partition: node 1 reads from partitions 1, 3, 5, 7, and 9, and the other from 2, 4, 6, 8, and 0. Recently, partition 8 stops responding until I stop my code and restart it. It seems like a device is sending a message that locks up the partition. What I want to do is list all devices in my partition 8. Is that possible? Is there a cloud shell command to get those devices in a list?
Not sure this will help you, but you can see the partition on the incoming messages. For example, you could use Azure Stream Analytics to see the partitions using this query:
Select GetMetadataPropertyValue(IoTHub, '[IoTHub].[ConnectionDeviceId]') as DeviceId, partitionId
from IoTHub
Also, if you run the ASA job locally in Visual Studio, it will tell you which device is sending malformed JSON, e.g.:
[Warning] 10/21/2021 9:12:54 AM : User Warning Source 'IoTHub' had 1 occurrences of kind 'InputDeserializerError.InvalidData' between processing times '2021-10-21T15:12:50.5076449Z' and '2021-10-21T15:12:50.5712076Z'. Could not deserialize the input event(s) from resource 'Partition: [1], Offset: [455266583232], SequenceNumber: [634800], DeviceId: [DeviceName]' as Json. Some possible reasons: 1) Malformed events 2) Input source configured with incorrect serialization format
Also check your "Activity Log" blade in the ASA job. It may have more details for you.
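There is no registry query that lists devices by partition (the partition is derived from a hash of the device id), but you can also read a single partition from the IoT Hub's built-in Event Hub-compatible endpoint and collect the device ids that actually sent messages there. A rough sketch, shown here in Java with the azure-messaging-eventhubs client (the connection string, hub name, counts and timeouts are placeholders, not values from your setup); it only surfaces devices that sent something during the window you read:

// Hypothetical sketch: list device ids seen on partition "8" of the
// IoT Hub's Event Hub-compatible endpoint. Placeholders in angle brackets.
import com.azure.core.util.IterableStream;
import com.azure.messaging.eventhubs.EventHubClientBuilder;
import com.azure.messaging.eventhubs.EventHubConsumerClient;
import com.azure.messaging.eventhubs.models.EventPosition;
import com.azure.messaging.eventhubs.models.PartitionEvent;
import java.time.Duration;
import java.util.HashSet;
import java.util.Set;

public class ListDevicesInPartition {
  public static void main(String[] args) {
    // Event Hub-compatible connection string and name from the IoT Hub
    // "Built-in endpoints" blade (placeholders, not from the question).
    String connectionString = "<event-hub-compatible-connection-string>";
    String eventHubName = "<event-hub-compatible-name>";

    EventHubConsumerClient consumer = new EventHubClientBuilder()
        .connectionString(connectionString, eventHubName)
        .consumerGroup(EventHubClientBuilder.DEFAULT_CONSUMER_GROUP_NAME)
        .buildConsumerClient();

    Set<String> deviceIds = new HashSet<>();
    // Read up to 500 events from partition "8", waiting at most 30 seconds.
    IterableStream<PartitionEvent> events =
        consumer.receiveFromPartition("8", 500, EventPosition.earliest(), Duration.ofSeconds(30));
    for (PartitionEvent event : events) {
      // IoT Hub stamps the sending device on each message as a system property.
      Object deviceId = event.getData().getSystemProperties().get("iothub-connection-device-id");
      if (deviceId != null) {
        deviceIds.add(deviceId.toString());
      }
    }
    consumer.close();
    deviceIds.forEach(System.out::println);
  }
}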

Presto: cannot access the web page while using SSL

My Presto version is 0.240.
What I want to do: enable SSL so that Presto can be used over HTTPS.
I changed my config following only this URL: https://trino.io/docs/current/security/internal-communication.html
But I can't access the Presto address https://192.168.100.142:9999/
I don't know which step I did wrong.
What should I do to implement HTTPS for Presto?
This is my config:
A cluster of two machines.
Node 1 (192.168.100.142), hostname sbider-dev-01:
/opt/presto-server-0.240/etc/config.properties
coordinator=true
node-scheduler.include-coordinator=true
query.max-memory=7.5GB
query.max-memory-per-node=3.5GB
query.max-total-memory-per-node=3.5GB
experimental.reserved-pool-enabled=false
memory.heap-headroom-per-node=0.5GB
#experimental.spill-enabled=true
#experimental.max-spill-per-node=8GB
#experimental.query-max-spill-per-node=8GB
query.low-memory-killer.policy=total-reservation-on-blocked-nodes
#http-server.http.port=9999
#discovery-server.enabled=true
#discovery.uri=http://192.168.100.142:9999
internal-communication.shared-secret="8HRJWX41DwtuYZcNw8uMbshA8wDLoLS78tT3UVL+Z+m0xG7KCygGurE9SXEbGy2bLtPLza1MhAnWJp2mJp/S+j9EFWWuztXz7cHJhSz9QFiVxYCs1Wzn+IVKgHD5z+iGbdKjwRtgUjwNvS4MIfqwqwKlVZiEtGgEDv7j/kAgpOYPvFCRJfb/U/+b7qPpwPNDA6kXu3Dj5p1Q81+kmbFO59WSh6c4QwqdbFHAaY8XFWo8tIogxpmwQQqV3BvICmesxlIhBH/pOGgoyl86QQ/TaAMaWjaddNcgO5keTGhhOj/juGZ/gbOL/PHGNs1ENSPRnjvIGLHFQPDrm36YenhfTH5L7X0Q9HwwnEpEoYkDJsmMEV+elPZK767nZXHryuvDvHGs0PhYSRO8ekOgC3CaE1tfiGh5M9H5C2fnyeGRQ0iwtgXh83kRDuPzVrRx5yj2cHQJOZu+CcXCJ3aa1Tijxq56RfdcEz9Frr8n8aXaNMtRlchcXn3+B4biByS9duq28VHHBDlyYQQ6VSKbLDt1GBi5oOQICtrGuOY+/MD+rnV5uxPUQcSIh9KmA1WjahJEz0ItDKpB66JgVkTrVDWEJPeozKTvHRLG9sBudRhQ5abJGEAhx9b78dUbTcEkRlPuvUN1WjwVlUzjyUDKd14ocuhpoOBzjV9kFhTqQZ4zgNo="
http-server.http.enabled=false
#node.internal-address-source=FQDN
node.internal-address=sbider-dev-01,sbider-dev-02
http-server.https.enabled=true
http-server.https.port=9999
# full path to the jks keystore file
http-server.https.keystore.path=/ceshi/keystore.jks
http-server.https.keystore.key=123456
discovery.uri=https://192.168.100.142:9999
internal-communication.https.required=true
internal-communication.https.keystore.path=/ceshi/keystore.jks
internal-communication.https.keystore.key=123456
Node 2 (143), hostname sbider-dev-02:
cat /opt/presto-server-0.240/etc/config.properties
coordinator=flase
query.max-memory=7.5GB
query.max-memory-per-node=3.5GB
query.max-total-memory-per-node=3.5GB
experimental.reserved-pool-enabled=false
memory.heap-headroom-per-node=0.5GB
#experimental.spill-enabled=true
#experimental.max-spill-per-node=8GB
#experimental.query-max-spill-per-node=8GB
query.low-memory-killer.policy=total-reservation-on-blocked-nodes
#discovery.uri=http://192.168.100.142:9999
internal-communication.shared-secret="8HRJWX41DwtuYZcNw8uMbshA8wDLoLS78tT3UVL+Z+m0xG7KCygGurE9SXEbGy2bLtPLza1MhAnWJp2mJp/S+j9EFWWuztXz7cHJhSz9QFiVxYCs1Wzn+IVKgHD5z+iGbdKjwRtgUjwNvS4MIfqwqwKlVZiEtGgEDv7j/kAgpOYPvFCRJfb/U/+b7qPpwPNDA6kXu3Dj5p1Q81+kmbFO59WSh6c4QwqdbFHAaY8XFWo8tIogxpmwQQqV3BvICmesxlIhBH/pOGgoyl86QQ/TaAMaWjaddNcgO5keTGhhOj/juGZ/gbOL/PHGNs1ENSPRnjvIGLHFQPDrm36YenhfTH5L7X0Q9HwwnEpEoYkDJsmMEV+elPZK767nZXHryuvDvHGs0PhYSRO8ekOgC3CaE1tfiGh5M9H5C2fnyeGRQ0iwtgXh83kRDuPzVrRx5yj2cHQJOZu+CcXCJ3aa1Tijxq56RfdcEz9Frr8n8aXaNMtRlchcXn3+B4biByS9duq28VHHBDlyYQQ6VSKbLDt1GBi5oOQICtrGuOY+/MD+rnV5uxPUQcSIh9KmA1WjahJEz0ItDKpB66JgVkTrVDWEJPeozKTvHRLG9sBudRhQ5abJGEAhx9b78dUbTcEkRlPuvUN1WjwVlUzjyUDKd14ocuhpoOBzjV9kFhTqQZ4zgNo="
http-server.http.enabled=false
#node.internal-address-source=FQDN
node.internal-address=sbider-dev-01,sbider-dev-02
http-server.https.enabled=true
http-server.https.port=9999
http-server.https.keystore.path=/ceshi/keystore.jks
http-server.https.keystore.key=123456
discovery.uri=https://192.168.100.142:9999
internal-communication.https.required=true
internal-communication.https.keystore.path=/ceshi/keystore.jks
internal-communication.https.keystore.key=123456
Server log on sbider-dev-01 (cat /opt/presto-server-0.240/var/log/server.log):
Companion catalogs: catalog_name1=catalog_name2,catalog_name3=catalog_name4,...
2021-01-12T12:41:09.766+0800 INFO main Bootstrap transaction.idle-check-interval 1.00m 1.00m Time interval between idle transactions checks
2021-01-12T12:41:09.766+0800 INFO main Bootstrap transaction.idle-timeout 5.00m 5.00m Amount of time before an inactive transaction is considered expired
2021-01-12T12:41:09.767+0800 INFO main Bootstrap transaction.max-finishing-concurrency 1 1 Maximum parallelism for committing or aborting a transaction
2021-01-12T12:41:09.767+0800 WARN main Bootstrap UNUSED PROPERTIES
2021-01-12T12:41:09.767+0800 WARN main Bootstrap internal-communication.shared-secret
2021-01-12T12:41:09.767+0800 WARN main Bootstrap
2021-01-12T12:41:11.037+0800 ERROR main com.facebook.presto.server.PrestoServer Unable to create injector, see the following errors:
1) Configuration property 'internal-communication.shared-secret' was not used
at com.facebook.airlift.bootstrap.Bootstrap.lambda$initialize$2(Bootstrap.java:238)
1 error
com.google.inject.CreationException: Unable to create injector, see the following errors:
1) Configuration property 'internal-communication.shared-secret' was not used
at com.facebook.airlift.bootstrap.Bootstrap.lambda$initialize$2(Bootstrap.java:238)
1 error
at com.google.inject.internal.Errors.throwCreationExceptionIfErrorsExist(Errors.java:543)
at com.google.inject.internal.InternalInjectorCreator.initializeStatically(InternalInjectorCreator.java:159)
at com.google.inject.internal.InternalInjectorCreator.build(InternalInjectorCreator.java:106)
at com.google.inject.Guice.createInjector(Guice.java:87)
at com.facebook.airlift.bootstrap.Bootstrap.initialize(Bootstrap.java:245)
at com.facebook.presto.server.PrestoServer.run(PrestoServer.java:131)
at com.facebook.presto.server.PrestoServer.main(PrestoServer.java:77)
You're following the Trino (formerly Presto SQL) documentation for securing internal communication, but you got your Presto binary from Facebook's fork of the project (prestodb).
Go to https://trino.io/download.html to get the latest Trino release.
The alternative (using prestodb's documentation with prestodb's binary) is not a safe, viable option, due to known security issues that are not fixed in the prestodb code base.

SAP HANA How to debug / fix "insufficient privilege" Error

In SAP HANA I am trying to call a stored procedure with a table type as an input parameter.
Other input parameters work just fine, but as soon as I use a table type I get the error:
Failed to execute action: InternalError: dberror($.hdb.Connection.executeProcedure): 258 - SQL error, server error code: 258. insufficient privilege: Not authorized at /sapmnt/ld7272/a/HDB/jenkins_prod/workspace/8uyiojyvla/s/ptime/query/checker/query_check.cc:4003
How can I fix / debug this?
The indexserver trace contains:
[19984]{315590}[100/100235487] 2018-08-22 10:07:13.949679 i TraceContext TraceContext.cpp(01028) : UserName=SAPDBCTRL, ApplicationUserName=SM_EFWK, ApplicationName=ABAP:AS2, ApplicationSource=CL_SQL_STATEMENT==============CP:304, Client=010, StatementHash=31c1e1f5ca72868a541d58fc5a77596b, EppRootContextId=0050560204981EE782C14A33A16BC68E, EppTransactionId=47BF1E2CEE9D05A0E005B7CF04FCF981, EppConnectionId=5B7C13CC22061B08E10000000A1807AF, EppConnectionCounter=1, EppComponentName=AS2/sapas2ci_AS2_01, EppAction=EFWK RESOURCE MANAGER
[19984]{315590}[100/100235487] 2018-08-22 10:07:13.949656 w SQLScriptExecuto se_eapi_proxy.cc(00144) : Error <exception 71000258: Not authorized
> in preparation of internal statement: delete from _SYS_STATISTICS.STATISTICS_PROPERTIES where key='internal.check.store_results'
[19984]{315590}[100/100235487] 2018-08-22 10:07:13.949904 e SQLScript trex_llvm.cc(00936) : Llang Runtime Error: Exception::SQLException258: insufficient privilege: Not authorized
at main (line 63) ("_SYS_STATISTICS"."SHARED_STORE_USED_VALUES": line 8 col 5 (at pos 456))
This seems rather straightforward:
The application user (the person using SAP NetWeaver) SM_EFWK logged on in client 010 is trying to delete data from an SAP HANA statistics service table _SYS_STATISTICS.STATISTICS_PROPERTIES.
The NetWeaver/ABAP program uses a secondary database connection with the database user SAPDBCTRL.
The error Exception::SQLException258: insufficient privilege: Not authorized is thrown because this SAPDBCTRL database user does not have the DELETE privilege on this table assigned to it (neither directly, nor via a schema or role privilege).
If the SQL command is part of an SAP standard program, then I'd check that the recommended setup has been implemented correctly.
If this command comes from a custom program, you may want to either assign the privilege or use a different technical user, as SAPDBCTRL is an SAP standard user that shouldn't be modified.
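If you want to confirm the missing privilege before choosing between those two options, you can check the EFFECTIVE_PRIVILEGES system view for SAPDBCTRL. A minimal JDBC sketch, assuming the SAP HANA JDBC driver (ngdbc.jar) is on the classpath and using placeholder connection details:

// Hypothetical check of SAPDBCTRL's effective privileges on the statistics table.
// Host, port and admin credentials are placeholders, not values from the question.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class CheckHanaPrivilege {
  public static void main(String[] args) throws Exception {
    Class.forName("com.sap.db.jdbc.Driver");
    String url = "jdbc:sap://<hana-host>:<port>";
    try (Connection conn = DriverManager.getConnection(url, "<admin-user>", "<password>");
         PreparedStatement stmt = conn.prepareStatement(
             "SELECT PRIVILEGE, GRANTOR, IS_VALID "
             + "FROM SYS.EFFECTIVE_PRIVILEGES "
             + "WHERE USER_NAME = 'SAPDBCTRL' "
             + "  AND SCHEMA_NAME = '_SYS_STATISTICS' "
             + "  AND OBJECT_NAME = 'STATISTICS_PROPERTIES'");
         ResultSet rs = stmt.executeQuery()) {
      boolean any = false;
      while (rs.next()) {
        any = true;
        System.out.printf("%s (grantor %s, valid %s)%n",
            rs.getString(1), rs.getString(2), rs.getString(3));
      }
      if (!any) {
        // No effective privilege on the table at all, consistent with the 258 error.
        System.out.println("SAPDBCTRL has no privileges on _SYS_STATISTICS.STATISTICS_PROPERTIES");
      }
    }
  }
}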

BigQuery load works manually but not through the Java SDK

I have a Dataflow pipeline, running locally. The objective is to read a JSON file using TextIO, make sessions, and load it into BigQuery. Given the structure, I have to create a temp directory in GCS and then load it into BigQuery from there. Previously I had a data schema error that prevented me from loading the data, see here. That issue is resolved.
So now when I run the pipeline locally it ends with dumping a temporary JSON newline delimited file into GCS. The SDK then gives me the following:
Starting BigQuery load job beam_job_xxxx_00001-1: try 1/3
INFO [main] (BigQueryIO.java:2191) - BigQuery load job failed: beam_job_xxxx_00001-1
...
Exception in thread "main" com.google.cloud.dataflow.sdk.Pipeline$PipelineExecutionException: java.lang.RuntimeException: Failed to create the load job beam_job_xxxx_00001, reached max retries: 3
at com.google.cloud.dataflow.sdk.Pipeline.run(Pipeline.java:187)
at pedesys.Dataflow.main(Dataflow.java:148)
Caused by: java.lang.RuntimeException: Failed to create the load job beam_job_xxxx_00001, reached max retries: 3
at com.google.cloud.dataflow.sdk.io.BigQueryIO$Write$WriteTables.load(BigQueryIO.java:2198)
at com.google.cloud.dataflow.sdk.io.BigQueryIO$Write$WriteTables.processElement(BigQueryIO.java:2146)
The errors are not very descriptive and the data is still not loaded into BigQuery. What is puzzling is that if I go to the BigQuery UI and manually load the same temporary file from GCS (the one dumped by the SDK's Dataflow pipeline) into the same table, it works beautifully.
The relevant code parts are as follows:
PipelineOptions options = PipelineOptionsFactory.create();
options.as(BigQueryOptions.class)
.setTempLocation("gs://test/temp");
Pipeline p = Pipeline.create(options);
...
...
session_windowed_items.apply(ParDo.of(new FormatAsTableRowFn()))
.apply(BigQueryIO.Write
.named("loadJob")
.to("myproject:db.table")
.withSchema(schema)
.withCreateDisposition(BigQueryIO.Write.CreateDisposition.CREATE_IF_NEEDED)
.withWriteDisposition(BigQueryIO.Write.WriteDisposition.WRITE_APPEND)
);
The SDK is swallowing the error/exception and not reporting it to users. It's most likely a schema problem. To get the actual error that is happening you need to fetch the load job details, either via:
CLI: bq show -j beam_job_<xxxx>_00001-1
Browser/Web: use the "try it" feature at the bottom of the BigQuery jobs API reference page.
@jkff has raised an issue to improve the error reporting.
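If you'd rather pull the failure reason programmatically, the same job details can be fetched with the BigQuery client library. A small sketch, assuming the google-cloud-bigquery client and application-default credentials; substitute the actual load job id printed in your log:

// Hypothetical job inspection sketch; the job id below is a placeholder for the
// beam_job_..._00001-1 value printed by the SDK.
import com.google.cloud.bigquery.BigQuery;
import com.google.cloud.bigquery.BigQueryError;
import com.google.cloud.bigquery.BigQueryOptions;
import com.google.cloud.bigquery.Job;
import com.google.cloud.bigquery.JobId;

public class InspectLoadJob {
  public static void main(String[] args) {
    BigQuery bigquery = BigQueryOptions.getDefaultInstance().getService();
    Job job = bigquery.getJob(JobId.of("<load-job-id-from-the-log>"));
    if (job == null) {
      System.out.println("Job not found in this project");
      return;
    }
    if (job.getStatus().getError() != null) {
      System.out.println("Primary error: " + job.getStatus().getError());
    }
    if (job.getStatus().getExecutionErrors() != null) {
      for (BigQueryError e : job.getStatus().getExecutionErrors()) {
        // Per-record reasons (e.g. schema mismatches) show up here.
        System.out.println(e.getReason() + ": " + e.getMessage());
      }
    }
  }
}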

Problems deploying Grails application on cloudbees

I'm a new CloudBees user. I've succeeded in building my war file, but I can't deploy it: when I use the Deploy Now option and enter my application ID claxon/claxon, I always get the same error:
"hudson.util.IOException2: Server.InternalError - Expected Application ID format: account/appname > claxon/"
This is the log:
hudson.util.IOException2: Server.InternalError - Expected Application ID format: account/appname > claxon/
at com.cloudbees.plugins.deployer.deployables.Deployable.deployFile(Deployable.java:152)
at com.cloudbees.plugins.deployer.DeployNowRunAction$Deployer.perform(DeployNowRunAction.java:639)
at com.cloudbees.plugins.deployer.DeployNowRunAction.run(DeployNowRunAction.java:484)
at com.cloudbees.plugins.deployer.DeployNowTask$ExecutableImpl.run(DeployNowTask.java:157)
at hudson.model.ResourceController.execute(ResourceController.java:88)
at hudson.model.Executor.run(Executor.java:236)
Caused by: com.cloudbees.api.BeesClientException: Server.InternalError - Expected Application ID format: account/appname > claxon/
at com.cloudbees.api.BeesClient.readResponse(BeesClient.java:850)
at com.cloudbees.api.BeesClient.applicationDeployArchive(BeesClient.java:435)
at com.cloudbees.plugins.deployer.deployables.Deployable.deployFile(Deployable.java:124)
... 5 more
Duration: 4.1 sec
Finished: FAILURE
Can anybody help me?
Don't include the claxon/ prefix; the account part is handled for you when you choose the account from the account drop-down. Just put the id part.
If you expand the help text you will see that it says not to add the prefix.
Note: I used my superuser rights to capture a screenshot of that help text; the only bit that might be confidential is your account name, which you had already exposed in the original question.