I have uploaded my train.csv and valid.csv into Colab using the files.upload() snippet:
User uploaded file "valid.txt" with length 3387762 bytes
User uploaded file "train.txt" with length 9401172 bytes
Running some TensorFlow code that works fine locally and reads files from the current directory causes the following error in Colab:
InvalidArgumentError: assertion failed: [string_input_producer requires a non-null input tensor]
[[Node: input_producer/Assert/Assert = Assert[T=[DT_STRING], summarize=3, _device="/job:localhost/replica:0/task:0/device:CPU:0"](input_producer/Greater, input_producer/Assert/Assert/data_0)]]
I assume the code can't see the files? What's the path to the uploaded files?
Do the answers to this question help?
How to import and read a shelve or Numpy file in Google Colaboratory?
(files.upload() stores the uploaded files in memory. To work with them as files on the filesystem, you'll need to write them out explicitly, as in the sketch below.)
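For example, a minimal sketch of writing the uploaded bytes out to the Colab filesystem (the variable names are just placeholders):
from google.colab import files

# files.upload() returns a dict mapping each uploaded filename to its bytes.
uploaded = files.upload()

# Write each uploaded file into the current working directory so that code
# expecting real files (e.g. string_input_producer) can find it there.
for name, data in uploaded.items():
    with open(name, 'wb') as f:
        f.write(data)
The files then live in the current directory (e.g. ./train.txt and ./valid.txt), which is where your TensorFlow input pipeline is looking.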
I trained a StyleGAN2-ADA model on a custom dataset, which generated a .pkl file. I'm now trying to load the .pkl file so that I can convert it to a .pt file, but when I load it using:
pickle.load(f)
I'm getting a ModuleNotFoundError: No module named 'torch_utils.persistence'
I've installed torch_utils and other dependencies, but I'm not sure how to fix this for loading the file. If anyone has run into this issue when loading a .pkl file, any help would be greatly appreciated!
Same issue on GitHub here, but no clear solution.
I have tried installing torch_utils multiple times, but the error still persists.
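There is no confirmed fix in the thread, but one common cause is that the pickle refers to torch_utils.persistence from the stylegan2-ada-pytorch repository rather than the PyPI torch_utils package, so the repo itself has to be importable when unpickling. A rough sketch under that assumption (the repo path and .pkl filename are placeholders):
import sys
import pickle

# Assumption: the NVlabs stylegan2-ada-pytorch repo has been cloned locally,
# e.g. via: git clone https://github.com/NVlabs/stylegan2-ada-pytorch
# The pickle was written with torch_utils.persistence from that repo, so the
# repo directory must be on sys.path when the pickle is loaded.
sys.path.insert(0, './stylegan2-ada-pytorch')

with open('network-snapshot.pkl', 'rb') as f:  # placeholder filename
    data = pickle.load(f)

# Per that repo's convention, the snapshot dict holds 'G', 'D' and 'G_ema'.
G = data['G_ema']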
I have code in Google Colab that uses a Python package called Atlite, which in turn retrieves data from the Climate Data Store (CDS) through an API key.
When running this code locally, I just need the file containing the key saved in a specific folder and the code runs perfectly fine.
When I try to run the code in Google Colab the following error arises:
Exception: Missing/incomplete configuration file: /root/.cdsapirc
I have the file ".cdsapirc" in my computer but when I try to import it to the "/root" folder in Google Colab it just does not get imported. I can import a .py file, but when I try to import the ".cdsapirc" file (which is basically a txt file) it does not work.
Could someone please help me to solve this issue?
Thank you!
Regards,
Sebastian
If uploading the .cdsapirc file doesn't work, you could try creating it inside Google Colab using a simple Python script:
uid = "<your uid>"
apikey = "<your api-key"
with open("/root/.cdsapirc", "w") as f:
print("url: https://cds.climate.copernicus.eu/api/v2", file=f)
print(f"key: {uid}:{apikey}", file=f)
You can get the uid and apikey either from CDS after logging in, or by opening your local .cdsapirc file and looking them up there; see here for more information.
There might be a nicer solution by someone more familiar with Google Colab though.
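As a quick sanity check (assuming the cdsapi package is installed in the Colab runtime, e.g. via pip install cdsapi), you can read the file back and let the client pick it up:
import cdsapi

# Print the file back to confirm the url/key lines were written as expected.
with open("/root/.cdsapirc") as f:
    print(f.read())

# cdsapi.Client() reads ~/.cdsapirc (here /root/.cdsapirc) by default and will
# raise the same "Missing/incomplete configuration file" exception if the file
# is missing or malformed.
c = cdsapi.Client()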
I was trying to read a CSV file in a Jupyter notebook, but it raised a FileNotFoundError. I then checked whether the file is present, and that check returned False. However, I have verified the file location in my file explorer and the CSV file is there. How should I read the file?
import os
os.path.isfile(r'C:\Users\Ritesh\Downloads\Data\Amazon_Products.csv')
Screenshot of code and error
Maybe try:
import pandas as pd
df = pd.read_csv('your-filepath')
You could also try moving the file into your project directory so that it is in the same folder as the .ipynb notebook.
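If the relative path is the problem, a small hedged check of where the notebook is actually running can narrow it down (the Windows path below is the one from your os.path.isfile call):
import os
import pandas as pd

# Show where the notebook process is running and what it can see there.
print(os.getcwd())
print(os.listdir())

# An absolute path avoids any dependence on the notebook's working directory.
path = r'C:\Users\Ritesh\Downloads\Data\Amazon_Products.csv'
print(os.path.isfile(path))  # should print True before attempting to read

df = pd.read_csv(path)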
I want to use FaceNet as an embedding layer (which won't be trainable).
I tried loading FaceNet like so:
tf.keras.models.load_model('./path/tf_facenet')
where the directory ./path/tf_facenet contains the 4 files that can be downloaded at https://drive.google.com/file/d/0B5MzpY9kBtDVZ2RpVDYwWmxoSUk/edit
but an error message shows up:
OSError: SavedModel file does not exist at: ./path/tf_facenet/{saved_model.pbtxt|saved_model.pb}
And the .h5 files downloaded from https://github.com/nyoki-mtl/keras-facenet don't seem to work either (they use TensorFlow 1.3).
I had the same issue when loading the Keras FaceNet model. Maybe your Python environment is missing the h5py module.
So you should install it with conda install h5py.
Hope this helps!
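If you go the keras-facenet route, here is a hedged sketch of loading the .h5 file directly and freezing it as an embedding layer (the filename facenet_keras.h5 and the 160x160x3 input size follow that repo, so adjust if your copy differs):
import tensorflow as tf

# Load the Keras FaceNet model from the .h5 file (not a SavedModel directory).
# Assumption: the weights from https://github.com/nyoki-mtl/keras-facenet have
# been downloaded locally as facenet_keras.h5.
facenet = tf.keras.models.load_model('facenet_keras.h5', compile=False)

# Freeze the weights so it acts as a non-trainable embedding layer.
facenet.trainable = False

# Example: wrap it inside a larger model.
inputs = tf.keras.Input(shape=(160, 160, 3))  # expected input size per that repo
embeddings = facenet(inputs)
model = tf.keras.Model(inputs, embeddings)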
exportDir = "gs://testbucket/export/";
SavedModelBundle b = SavedModelBundle.load(exportDir, "serve");
This gives me the error:
org.tensorflow.TensorFlowException: SavedModel not found in export directory: gs://testbucket/export/
Copying the saved_model.pb to a local directory and then providing the path on the local filesystem works, whereas
tf.saved_model.loader.load(session, [tf.saved_model.tag_constants.SERVING], export_dir)
works fine with the GCS bucket. Does anyone know whether loading models through the SavedModelBundle API does not support GCS buckets? How can I load saved_model.pb and the variables from a GCS bucket in Java without copying them over to the local filesystem?
tf.saved_model.tag_constants.SERVING is the string 'serve', while 'serving_default' is the default serving signature key.
Perhaps a mix-up between the tag and the signature key is the problem?