with strategy.scope():
    model = transformer(vocab_size=VOCAB_SIZE,
                        num_layers=NUM_LAYERS,
                        units=UNITS,
                        d_model=D_MODEL,
                        num_heads=NUM_HEADS,
                        is_encoder=True,
                        dropout=DROPOUT)
    model.load_weights("path")
I get the following error:
InvalidArgumentError: Unsuccessful TensorSliceReader constructor: Failed to get matching files on path: UNIMPLEMENTED: File system scheme '[local]' not implemented (file: 'path')
TL;DR: You need to use Cloud Storage (GCS), which is a Google Cloud Platform (GCP) service.
As stated in the Cloud TPU documentation (https://cloud.google.com/tpu/docs/troubleshooting/trouble-tf#cannot_use_local_filesystem), TPU servers do not have access to your local storage; they can only see files in GCS buckets.
You need to place all the data used for training a model (or inference, depending on your intent) in a GCS bucket, including pretrained weights and the dataset. Note that GCS is a paid service, albeit not very pricey (and first-time users get a trial period).
The links to the official GCP docs below might help you get started:
Create storage buckets
Connecting to Cloud Storage Buckets
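For example (a minimal sketch, with a hypothetical bucket and checkpoint name), once you have copied your local checkpoint files into a bucket with gsutil cp, the loading code from the question only changes in the path it points to:

with strategy.scope():
    model = transformer(vocab_size=VOCAB_SIZE,
                        num_layers=NUM_LAYERS,
                        units=UNITS,
                        d_model=D_MODEL,
                        num_heads=NUM_HEADS,
                        is_encoder=True,
                        dropout=DROPOUT)
    # TPU workers can read gs:// paths, unlike paths on your local filesystem
    model.load_weights("gs://my-tpu-bucket/checkpoints/my_checkpoint")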
I'm attempting to download and use the Google Landmark v2 dataset using TensorFlow Federated with the following code:
train, test = tff.simulation.datasets.gldv2.load_data(gld23k=True)
At some point during the download this error occurs:
ValueError: Incomplete or corrupted file detected. The md5 file hash does not match the provided value of 825975950b2e22f0f66aa8fd26c1f153 images_000.tar.
I've tried on Google CoLab and my personal machine but the same error occurs.
Is there any way to get around this issue?
Thanks, any help is appreciated.
I've found this nice article on how to directly stream data from Google Storage to tf.data. This is super handy if your compute tier has limited storage (like on KNative in my case) and network bandwidth is sufficient (and free of charge anyway).
tfds.load(..., try_gcs=True)
Unfortunately, my data resides in a non-Google bucket, and this isn't documented for other cloud object store systems.
Does anybody know if it also works in non-GCS environments?
I'm not sure how this is implemented in the library, but it should be possible to access other object store systems in a similar way.
You might need to extend the current mechanism to use a more generic API like the S3 API (most object stores have this as a compatibility layer). If you do need to do this, I'd recommend contributing it back upstream, as it seems like a generally-useful capability when either storage space is tight or when fast startup is desired.
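As a rough sketch of what that could look like with TensorFlow's S3 filesystem plugin (shipped with the tensorflow-io package in recent TF releases; the bucket name, endpoint, and file name below are made up), streaming TFRecords from an S3-compatible store works along these lines:

import os
import tensorflow as tf
import tensorflow_io  # importing registers the s3:// filesystem scheme

# Credentials and endpoint of the S3-compatible object store (placeholder values)
os.environ["AWS_ACCESS_KEY_ID"] = "..."
os.environ["AWS_SECRET_ACCESS_KEY"] = "..."
os.environ["S3_ENDPOINT"] = "object-store.example.com:9000"
os.environ["S3_USE_HTTPS"] = "0"  # in-cluster endpoint without TLS

# Stream TFRecord shards directly from the bucket, no local copy needed
dataset = tf.data.TFRecordDataset("s3://my-bucket/train/shard-00000.tfrecord")

Whether tfds.load itself can be pointed at such a bucket (for example via its data_dir argument) is something I have not verified, so treat this as an assumption to test against your setup.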
I have an object detection (OD) model trained on a custom dataset. I would like to deploy the model as an API. The model will be used for real-time inference, and I am planning on monetizing this API on one of the API marketplaces such as AWS, Rakuten's Rapid API, etc.
My concern is that if the OD model is provided as an API, performing real-time predictions on a video stream (a surveillance camera feed) will introduce network latency that makes the app slower. Are there any other alternatives to solve the latency issue?
For instance, if I package the code and artifacts to be executed on the client's system, network latency can be eliminated, but at the risk of exposing the model, code, etc. So an API seems to be the ideal solution for my use case.
What would be the best approach to execute such a scenario?
Moreover, pre-processing and post-processing are involved for the images. Are there any platforms that help package our application and turn it into a black box that takes image inputs and provides image outputs?
For AWS Marketplace, you can sell an Amazon SageMaker "model package" product: a pre-trained model for making predictions that does not require any further training by the buyer.
This should address your concerns on intellectual-property protection and somewhat address your concerns on latency.
Regarding intellectual-property protection, you as the seller package your model inside a Docker container. When it is deployed in the buyer's Amazon SageMaker service in their AWS account, they have no direct access to the container. They can only interact with your model via the SageMaker APIs. More info here: https://docs.aws.amazon.com/marketplace/latest/userguide/ml-security-and-intellectual-property.html
Regarding latency, the model package is deployed in the buyer's AWS account, in the region of their choosing. Although a model package cannot be deployed onto edge devices, this brings inference one step closer to the buyer than if you, as the seller, hosted the API yourself.
For more information on publishing Amazon SageMaker products on AWS Marketplace, see the seller guide: https://docs.aws.amazon.com/marketplace/latest/userguide/machine-learning-products.html
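As a rough illustration of the buyer's side (this assumes the SageMaker Python SDK, and the ARN below is a placeholder), deploying a subscribed model package looks roughly like this:

import sagemaker
from sagemaker import ModelPackage

session = sagemaker.Session()
role = sagemaker.get_execution_role()  # IAM role with SageMaker permissions

# ARN of the model package the buyer subscribed to on AWS Marketplace (placeholder)
model_package_arn = "arn:aws:sagemaker:us-east-1:111122223333:model-package/your-listing"

model = ModelPackage(role=role,
                     model_package_arn=model_package_arn,
                     sagemaker_session=session)

# The model runs inside the seller's container; the buyer only interacts with the endpoint
predictor = model.deploy(initial_instance_count=1, instance_type="ml.m5.xlarge")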
I've uploaded to Google Cloud Platform a model that I trained and exported with lobe.ai. Now I want to send it a test request with an image so I can use it in my web application. How do I do this?
With your TensorFlow model (I deduce this from your tags), you have two solutions:
Either you test locally,
Or you deploy your model on AI Platform in online prediction mode.
In both cases, you have to submit the image binary + your features in a JSON instance that matches your model's inputs.
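A minimal sketch of building such a request for an image model (the input key image_bytes is an assumption and must match your model's serving signature):

import base64
import json

# Base64-encode the test image so it can travel inside a JSON payload
with open("test.jpg", "rb") as f:
    encoded = base64.b64encode(f.read()).decode("utf-8")

# One instance per prediction; the key must match your model's input name
request_body = {"instances": [{"image_bytes": {"b64": encoded}}]}

with open("request.json", "w") as f:
    json.dump(request_body, f)

# The file can then be sent to the deployed model, for example with:
#   gcloud ai-platform predict --model=your_model --json-request=request.json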
When using TPUs on Google Colab (such as in the MNIST example), we are told to create a GCS bucket. However, it doesn't tell us where. Without knowing the region/zone of the Colab instance, I am afraid to create a bucket in fear of running into billing issues.
There are actually several questions:
Is accessing a GCS bucket from Colab free, or do the normal network egress fees apply?
Can I get the region/zone of the Colab instance? Most likely not.
If the answer to both questions above is "no": is there any solution for minimizing costs when using TPUs with Colab?
Thank you for your question.
No, you cannot get the region/zone of the Colab instance, so you can try creating a multi-regional GCS bucket, which should be accessible from Colab. As per this comment, https://github.com/googlecolab/colabtools/issues/597#issuecomment-502746530, Colab TPU instances are only in US zones, so when creating a GCS bucket you can choose a multi-region bucket in the US.
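For instance (the bucket name is a placeholder), such a bucket can be created with gsutil:

gsutil mb -l US gs://your-tpu-bucket/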
Check out https://cloud.google.com/storage/pricing for more details on GCS bucket pricing.
You can also sign up for a Google Cloud Platform account with 5 GB of free storage and $300 in credits at https://cloud.google.com/free/, which should provide you with enough credits to get started.
We are told to create a GCS bucket. However, it doesn't tell us where.
Running (within Colab)
!curl ipinfo.io
You get something similar to
{
"ip": "3X.20X.4X.1XX",
"hostname": "13X.4X.20X.3X.bc.googleusercontent.com",
"city": "Groningen",
"region": "Groningen",
"country": "NL",
"loc": "53.21XX,6.56XX",
"org": "AS396XXX Google LLC",
"postal": "9711",
"timezone": "Europe/Amsterdam",
"readme": "https://ipinfo.io/missingauth"
}
This tells you where your Colab instance is running.
You can create a GCS bucket in just one region (if you don't need multi-region).
Assuming you don't change country/area very often, you can check that a few times (different days) and get an idea of where your Colab is probably going to be located.
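For example, if your instances consistently show up in the Netherlands as above, a single-region bucket close by (europe-west4 is the Netherlands region; the bucket name is a placeholder) could be created with:

gsutil mb -l europe-west4 gs://your-colab-bucket/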
For your other questions (egress,...) see the Conclusion on
https://ostrokach.gitlab.io/post/google-colab-storage/
[...] Google Cloud Storage is a good option for hosting our data. Only we should be sure to check that the Colab notebook is running in the same continent as your Cloud Storage bucket, or we will incur network egress charges!