I have a Synapse Workspace that is getting deployed via ARM (via Ev2). I have manually created a Managed Private Endpoint to a Private Link Service, and I need to be able to deploy this connection together with the workspace. When I look at the JSON for the workspace (or when I run az synapse workspace show), I don’t see the endpoint listed, so I am not sure where to start hunting. I don’t find much info online either.
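For context, this is roughly how I have been inspecting the workspace so far; the managed-private-endpoints subcommand is my reading of the az docs and may need a recent CLI version, so treat it as an assumption:

    # Workspace resource itself - the managed private endpoint does not show up here
    az synapse workspace show \
        --name <workspace-name> \
        --resource-group <resource-group>

    # The managed private endpoints seem to hang off the workspace's managed VNet,
    # so they have to be listed separately (assumed command, from the az docs)
    az synapse managed-private-endpoints list \
        --workspace-name <workspace-name>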
Thanks,
~john
I am new to BigQuery and I'm trying to understand how VPC access works for BigQuery projects. I have a BigQuery project that imports data from several other BigQuery projects (no VPC, but same organisation). I also need to connect to a project that is in a VPC network (still the same organisation).
The only way that I can read this VPC project is to:
Be a G Suite member
Connect to the organisation VPN
Open the Cloud Console through a proxy
I can only read the project and write queries if I'm in the VPC project itself.
I want to be able to read and write queries for the VPC project from my own project.
I want to be able to schedule daily imports of aggregated data from the VPC project into my project.
Will this be possible if I add my project to a service perimeter and get access through a perimeter bridge? What sort of access do I need to set up in order to read and import VPC project data directly in my project?
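For reference, the kind of setup I have in mind looks roughly like this; the flag names are my best reading of the Access Context Manager docs and are untested, and the project numbers are placeholders:

    # A bridge perimeter spanning my project and the protected VPC project,
    # so BigQuery calls can cross between them (untested sketch)
    gcloud access-context-manager perimeters create bq-bridge \
        --policy=<access-policy-id> \
        --title="bq-bridge" \
        --perimeter-type=bridge \
        --resources=projects/<my-project-number>,projects/<vpc-project-number>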
On this page you can find the BigQuery limitations when using VPC Service Controls. Basically, if you want to use a service account to access a BigQuery instance protected by a service perimeter, you must access it from within the perimeter.
VPC Service Controls does not support copying BigQuery resources protected by a service perimeter to another organization. Access levels do not enable you to copy across organizations.
To copy protected BigQuery resources to another organization, download the dataset (for example, as a CSV file), and then upload that file to the other organization.
The BigQuery Data Transfer Service is supported only for the following services:
Campaign Manager
Google Ad Manager
Google Ads
Google Cloud Storage
Google Merchant Center
Google Play
YouTube
The BigQuery Classic Web UI is not supported. A BigQuery instance protected by a service perimeter cannot be accessed with the BigQuery Classic Web UI.
The third-party ODBC driver for BigQuery cannot currently be used with the restricted VIP.
BigQuery audit log records do not always include all resources that were used when a request is made, due to the service internally processing access to multiple resources.
When using a service account to access a BigQuery instance protected by a service perimeter, the BigQuery job must be run within a project inside the perimeter. By default, the BigQuery client libraries will run jobs within the service account or user's project, causing the query to be rejected by VPC Service Controls.
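To illustrate that last point with the bq command-line tool rather than the client libraries, you can force the job to run under a project that sits inside the perimeter; the project and table names below are placeholders:

    # Run the query job under a project inside the service perimeter,
    # instead of the client's default project, so VPC Service Controls allows it
    bq --project_id=<project-inside-perimeter> query \
        --use_legacy_sql=false \
        'SELECT COUNT(*) FROM `<vpc-project>.<dataset>.<table>`'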
I deployed Spinnaker in AWS to run a test in the same account; however, I am unable to configure server groups. If I click Create, the task is queued with the account configured via hal on the CLI. Is there any way to troubleshoot this? The logs are looking light.
The storage backend needs to be configured correctly.
https://www.spinnaker.io/setup/install/storage/
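For an AWS installation that usually means pointing Spinnaker at an S3 bucket via Halyard, roughly along these lines (bucket name and region are placeholders, adjust to your setup):

    # Tell Halyard which S3 bucket to use as Spinnaker's storage backend
    hal config storage s3 edit \
        --bucket <my-spinnaker-bucket> \
        --region us-east-1

    # Switch the storage type to S3 and roll out the change
    hal config storage edit --type s3
    hal deploy apply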
I have a running Kubernetes cluster on Google Cloud Platform.
I want to deploy a postgres image to my cluster.
When selecting the image and my cluster, I get the error:
insufficient OAuth scope
I have been reading about it for a few hours now and couldn't get it to work.
I managed to set the scope of the VM to allow APIs:
Cloud API access scopes
Allow full access to all Cloud APIs
But from the GKE cluster details, I see that everything is disabled except Stackdriver.
Why is it so difficult to deploy an image or to change the scope?
How can I modify the cluster permissions without deleting and recreating it?
The easiest way is to delete and recreate the cluster, because there is no direct way to modify the scopes of an existing cluster. However, there is a workaround: create a new node pool with the correct scopes and then delete the old node pools. The cluster's scopes will change to reflect the new node pool, as sketched below.
More details can be found in this post.
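A rough sketch of that workaround with gcloud (cluster, zone, and pool names are placeholders):

    # Create a replacement node pool with broader OAuth scopes
    gcloud container node-pools create new-pool \
        --cluster=<cluster-name> \
        --zone=<zone> \
        --scopes=https://www.googleapis.com/auth/cloud-platform

    # Then delete the old pool so workloads reschedule onto the new one
    gcloud container node-pools delete default-pool \
        --cluster=<cluster-name> \
        --zone=<zone>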
I have an Azure AKS cluster on which I'm trying to deploy a custom image that I have pushed to Azure Container Registry.
I have created a Service Principal, and with that SP I have created my AKS cluster. This SP also has read access on my ACR, as described in the article below:
https://learn.microsoft.com/en-us/azure/container-registry/container-registry-auth-aks
However, my pods are not being created; they give the message "Back-off pulling image".
Am I missing something?
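For reference, the role assignment I set up follows the linked article and looks roughly like this (names and IDs are placeholders; the role may be Reader or AcrPull depending on the doc version):

    # Look up the registry's resource ID
    ACR_ID=$(az acr show --name <acr-name> --resource-group <resource-group> --query id -o tsv)

    # Grant the AKS service principal pull access on the registry
    az role assignment create \
        --assignee <service-principal-app-id> \
        --scope "$ACR_ID" \
        --role AcrPull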
It seems I was trying to use Windows containers, which are not yet supported on Azure AKS. I switched to a Linux image and it worked fine.
I have a use case as below.
I created a sample Mule flow using the SMB connector as the inbound endpoint, which reads files from a network share on a specific machine, and it is working fine.
The problem is that I now want to deploy this code to CloudHub and read the files from the same shared location.
Can someone please guide me on what steps need to be taken care of?
Is this achievable using a VPC?
The original answer was provided by David Dossot in the comments; however, the link is outdated.
To summarize, to connect from a CloudHub application to an on-premise or private service you need to establish some kind of network connectivity. For that, CloudHub requires a VPC and a connectivity link. The process for the latter is described on the documentation page To Request VPC Connectivity to Your Network.