Failing to connect to the V-REP remote API in Google Colab - google-colaboratory

I want to connect to V-REP using the Python remote API from Google Colab. I tried running the sample code below in a Jupyter notebook and it works. However, when I switch to Google Colab, the remote API fails to connect to V-REP even though the code is the same.
import vrep

vrep.simxFinish(-1)
clientID = vrep.simxStart('127.0.0.1', 19997, True, True, 500, 5)
if clientID != -1:  # if we connected successfully
    print('Connected to remote API server')
else:
    print('Fail to connect')
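One relevant detail: a Colab notebook runs on a Google-hosted VM, so 127.0.0.1 there refers to that VM, not to the machine where V-REP is running. Purely as an illustration (the host below is a placeholder), a connection from Colab would have to target an address that the Colab VM can actually reach:
import vrep

# Illustrative sketch only: replace the placeholder with the public address of
# the machine that actually runs V-REP and has the remote API port open.
VREP_HOST = '203.0.113.10'  # placeholder, not a real server
VREP_PORT = 19997           # remote API port configured on the V-REP side

vrep.simxFinish(-1)
clientID = vrep.simxStart(VREP_HOST, VREP_PORT, True, True, 500, 5)
print('Connected to remote API server' if clientID != -1 else 'Fail to connect')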

Related

Does AWS SageMaker support gRPC prediction requests?

I deployed a SageMaker TensorFlow model from an estimator in local mode, and when trying to call the TensorFlow Serving (TFS) predict endpoint using gRPC I get the error:
grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
status = StatusCode.UNAVAILABLE
details = "failed to connect to all addresses"
I'm making the gRPC request exactly as in this blog post:
import grpc
import numpy as np
from tensorflow.compat.v1 import make_tensor_proto
from tensorflow_serving.apis import predict_pb2
from tensorflow_serving.apis import prediction_service_pb2_grpc

MAX_GRPC_MESSAGE_LENGTH = 512 * 1024 * 1024  # assumed size cap; not shown in the original snippet
grpc_port = 9000 # Tried also with other ports such as 8500
request = predict_pb2.PredictRequest()
request.model_spec.name = 'model'
request.model_spec.signature_name = 'serving_default'
request.inputs['input_tensor'].CopyFrom(make_tensor_proto(instance))
options = [
    ('grpc.enable_http_proxy', 0),
    ('grpc.max_send_message_length', MAX_GRPC_MESSAGE_LENGTH),
    ('grpc.max_receive_message_length', MAX_GRPC_MESSAGE_LENGTH),
]
channel = grpc.insecure_channel(f'0.0.0.0:{grpc_port}', options=options)
stub = prediction_service_pb2_grpc.PredictionServiceStub(channel)
result_future = stub.Predict.future(request, 30)
output_tensor_proto = result_future.result().outputs['predictions']
output_shape = [dim.size for dim in output_tensor_proto.tensor_shape.dim]
output_np = np.array(output_tensor_proto.float_val).reshape(output_shape)
prediction_json = {'predictions': output_np.tolist()}
Looking at the SageMaker docker container where TFS is running, I see in the logs that the REST endpoint is exported/exposed, but not the gRPC one, although it seems to be running:
tensorflow_serving/model_servers/server.cc:417] Running gRPC ModelServer at 0.0.0.0:9000 ...
Unlike the gRPC endpoint, the REST endpoint does show up as exported in the container logs:
tensorflow_serving/model_servers/server.cc:438] Exporting HTTP/REST API at:localhost:8501 ...
Do SageMaker TFS containers even support gRPC? How can one make a gRPC TFS prediction request using SageMaker?
SageMaker endpoints are REST endpoints. You can, however, make gRPC connections within the container; you just cannot make the InvokeEndpoint API call itself via gRPC.
If you are using the SageMaker TensorFlow container, you need to pass an inference.py script that contains the logic to make the gRPC request to TFS.
Kindly see this example inference.py script that makes a gRPC prediction against TensorFlow Serving.
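As a rough sketch of that approach (not the linked script itself), an inference.py for the SageMaker TFS container could forward the incoming request to TFS over gRPC along these lines; the handler interface, port, model name, and tensor key below are assumptions:
import json

import grpc
from tensorflow.compat.v1 import make_tensor_proto
from tensorflow_serving.apis import predict_pb2, prediction_service_pb2_grpc

GRPC_PORT = 9000  # assumed TFS gRPC port inside the container

def handler(data, context):
    """Hypothetical handler: parse the REST payload and forward it to TFS over gRPC."""
    instance = json.loads(data.read().decode('utf-8'))

    request = predict_pb2.PredictRequest()
    request.model_spec.name = 'model'                      # assumed model name
    request.model_spec.signature_name = 'serving_default'
    request.inputs['input_tensor'].CopyFrom(make_tensor_proto(instance))

    channel = grpc.insecure_channel(f'localhost:{GRPC_PORT}')
    stub = prediction_service_pb2_grpc.PredictionServiceStub(channel)
    result = stub.Predict(request, timeout=30.0)

    outputs = list(result.outputs['predictions'].float_val)
    return json.dumps({'predictions': outputs}), 'application/json'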

Issues opening the MLflow UI with pyngrok

I am very new to MLOps and MLflow. I am trying to use MLflow on Google Colab to track models; however, I am not able to open the UI on the local server.
I got a couple of errors:
The connection to http:xxxx was successfully tunneled to your ngrok client, but the client failed to establish a connection to the local address localhost:80.
Make sure that a web service is running on localhost:80 and that it is a valid address.
The error encountered was: dial tcp 127.0.0.1:80: connect: connection refused
After this error, I made some changes to the environment, downloaded ngrok, and provided the auth token as NGROK_AUTH_TOKEN = "xxxx".
Now I am getting the message below:
The code that I am using is:
!pip install pyngrok --quiet
from pyngrok import ngrok
ngrok.kill()
NGROK_AUTH_TOKEN = ""
ngrok.set_auth_token(NGROK_AUTH_TOKEN)
public_url = ngrok.connect(port="127.0.0.1:5000 ", proto="http", options={"bind_tls": True})
print("MLflow Tracking UI:", public_url)
Any help is highly appreciated.
TIA...
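For illustration only, a commonly used pattern is to start the MLflow UI in the background so that something is actually listening on the port before ngrok tunnels to it. A minimal sketch, assuming a recent pyngrok (where connect() takes addr) and that mlflow is installed in the Colab runtime:
import subprocess
from pyngrok import ngrok

# Serve the tracking UI on localhost:5000 (MLflow's default port).
subprocess.Popen(["mlflow", "ui", "--port", "5000"])

ngrok.kill()                  # drop any tunnels left over from earlier runs
ngrok.set_auth_token("xxxx")  # your ngrok auth token
public_url = ngrok.connect(addr="5000", proto="http")
print("MLflow Tracking UI:", public_url)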

How do you SSH into a Google Compute Engine VM instance with Python rather than the CLI?

I want to SSH into a GCE VM instance using the google-api-client. I am able to start an instance using google-api-client with the following code:
from googleapiclient import discovery
from oauth2client.client import GoogleCredentials
credentials = GoogleCredentials.get_application_default()
service = discovery.build('compute', 'v1', credentials=credentials)
project = 'my_project'
zone = 'us-west2-a'
instance = 'my_instance'
request = service.instances().start(project=project, zone=zone, instance=instance)
response = request.execute()
On the command line, the above code is equivalent to:
gcloud compute instances start my_instance
Similarly, to SSH into a GCE VM instance with the command line one writes:
gcloud init && gcloud compute ssh my_instance --project my_project --verbosity=debug --zone=us-west2-a
I've already got the SSH keys set up and all that.
I want to know how to write the equivalent of the above command line using the Google API Client or Python.
There is no official REST API method to connect to a Compute Engine instance with SSH. But assuming you have the SSH keys configured as per the documentation, in theory, you could use a third-party tool such as Paramiko. Take a look at this post for more details.
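As a rough sketch of the Paramiko route (the external IP, username, and key path below are placeholders, and the matching public key is assumed to already be on the instance):
import paramiko

EXTERNAL_IP = "203.0.113.10"  # the instance's external IP (placeholder)
USERNAME = "my_user"          # the Linux user the SSH key was provisioned for
KEY_FILE = "/home/my_user/.ssh/google_compute_engine"  # private key path (placeholder)

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect(EXTERNAL_IP, username=USERNAME, key_filename=KEY_FILE)

stdin, stdout, stderr = client.exec_command("hostname")
print(stdout.read().decode())
client.close()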

Enable Cloud Vision API to access a file on Cloud Storage

I have already seen that there are some similar questions, but none of them actually provides a full answer.
Since I cannot comment in that thread, I am opening a new one.
How do I address Brandon's comment below?
"... In order to use the Cloud Vision API with a non-public GCS object, you'll need to send OAuth authentication information along with your request for a user or service account which has permission to read the GCS object."
I have the JSON file the system gave me when I created the service account, as described here.
I am trying to call the API from a Python script.
It is not clear how to use it.
I'd recommend using the Vision API Client Library for Python to perform the call. You can install it on your machine (ideally in a virtualenv) by running the following command:
pip install --upgrade google-cloud-vision
Next, you'll need to set the environment variable GOOGLE_APPLICATION_CREDENTIALS to the file path of the JSON file that contains your service account key. For example, on a Linux machine you'd do it like this:
export GOOGLE_APPLICATION_CREDENTIALS="/home/user/Downloads/service-account-file.json"
Finally, you'll just have to call the Vision API client method you need (for example, the label_detection method here), like so:
from google.cloud import vision
from google.cloud.vision import types  # client library imports matching the types.Image usage below

def detect_labels():
    """Detects labels in the file located in Google Cloud Storage."""
    client = vision.ImageAnnotatorClient()
    image = types.Image()
    image.source.image_uri = "gs://bucket_name/path_to_image_object"
    response = client.label_detection(image=image)
    labels = response.label_annotations
    print('Labels:')
    for label in labels:
        print(label.description)
By initializing the client with no parameters, the library will automatically look for the GOOGLE_APPLICATION_CREDENTIALS environment variable you've previously set and will run on behalf of that service account. If you granted it permission to access the file, the call will succeed.
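For completeness, calling the function above could look like this (reusing the key file path from earlier):
import os

# Point the client library at the service account key before creating the client.
os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = "/home/user/Downloads/service-account-file.json"

detect_labels()  # prints the labels detected on the GCS image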

Configuring a Google Cloud bucket as the Airflow log folder

We just started using Apache Airflow in our project for our data pipelines. While exploring the features, we came to know about configuring a remote folder as the log destination in Airflow. For that we:
Created a Google Cloud bucket.
Created a new GS connection from the Airflow UI.
I am not able to understand all the fields. I just created a sample GS bucket under my project from the Google console and gave that project ID to this connection, leaving the key file path and scopes blank.
Then I edited the airflow.cfg file as follows:
remote_base_log_folder = gs://my_test_bucket/
remote_log_conn_id = test_gs
After these changes I restarted the web server and scheduler, but my DAGs are still not writing logs to the GS bucket. I can see the logs being created in base_log_folder, but nothing is created in my bucket.
Is there any extra configuration needed on my side to get this working?
Note: Using Airflow 1.8. (I faced the same issue with Amazon S3 as well.)
Update (20/09/2017): I tried the GS method (screenshot attached), but I am still not getting logs in the bucket.
Thanks,
Anoop R
I advise you to use a DAG to connect Airflow to GCP instead of the UI.
First, create a service account on GCP and download the JSON key.
Then execute this DAG (you can modify the scope of your access):
import json
from datetime import datetime

from airflow import DAG, settings
from airflow.models import Connection
from airflow.operators.python_operator import PythonOperator

def add_gcp_connection(ds, **kwargs):
    """Add an Airflow connection for GCP."""
    new_conn = Connection(
        conn_id='gcp_connection_id',
        conn_type='google_cloud_platform',
    )
    scopes = [
        "https://www.googleapis.com/auth/pubsub",
        "https://www.googleapis.com/auth/datastore",
        "https://www.googleapis.com/auth/bigquery",
        "https://www.googleapis.com/auth/devstorage.read_write",
        "https://www.googleapis.com/auth/logging.write",
        "https://www.googleapis.com/auth/cloud-platform",
    ]
    conn_extra = {
        "extra__google_cloud_platform__scope": ",".join(scopes),
        "extra__google_cloud_platform__project": "<name_of_your_project>",
        "extra__google_cloud_platform__key_path": '<path_to_your_json_key>'
    }
    conn_extra_json = json.dumps(conn_extra)
    new_conn.set_extra(conn_extra_json)

    session = settings.Session()
    if not (session.query(Connection).filter(Connection.conn_id ==
            new_conn.conn_id).first()):
        session.add(new_conn)
        session.commit()
    else:
        msg = '\n\tA connection with `conn_id`={conn_id} already exists\n'
        msg = msg.format(conn_id=new_conn.conn_id)
        print(msg)

dag = DAG('add_gcp_connection', start_date=datetime(2016, 1, 1), schedule_interval='@once')

# Task to add a connection
AddGCPCreds = PythonOperator(
    dag=dag,
    task_id='add_gcp_connection_python',
    python_callable=add_gcp_connection,
    provide_context=True)
Thanks to Yu Ishikawa for this code.
Yes, you need to provide additional information for both the S3 and the GCP connection.
S3
Configuration is passed via the extra field as JSON. You can provide only a profile
{"profile": "xxx"}
or credentials
{"profile": "xxx", "aws_access_key_id": "xxx", "aws_secret_access_key": "xxx"}
or path to config file
{"profile": "xxx", "s3_config_file": "xxx", "s3_config_format": "xxx"}
In case of the first option, boto will try to detect your credentials.
Source code - airflow/hooks/S3_hook.py:107
GCP
You can either provide key_path and scope (see Service account credentials) or credentials will be extracted from your environment in this order:
Environment variable GOOGLE_APPLICATION_CREDENTIALS pointing to a file with stored credentials information.
Stored "well known" file associated with gcloud command line tool.
Google App Engine (production and testing)
Google Compute Engine production environment.
Source code - airflow/contrib/hooks/gcp_api_base_hook.py:68
The reason for the logs not being written to your bucket could be related to the service account rather than to the Airflow configuration itself. Make sure it has access to the bucket in question; I had the same problem in the past.
Try adding more generous permissions to the service account, e.g. even project-wide Editor, and then narrowing them down. You could also try using the GCS client with that key and see if you can write to the bucket.
For me personally, this scope works fine for writing logs: "https://www.googleapis.com/auth/cloud-platform"
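As a quick way to test the "write to the bucket with that key" suggestion above, a minimal sketch using the google-cloud-storage client (the bucket name and key path are placeholders):
from google.cloud import storage

# Hypothetical check: can this service account key write to the log bucket?
client = storage.Client.from_service_account_json("/path/to/key.json")
bucket = client.bucket("my_test_bucket")
blob = bucket.blob("airflow-log-write-test.txt")
blob.upload_from_string("test")  # raises an error if the key lacks write access
print("Write succeeded")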