I'm trying to compile PB files for gRPC calls to TensorFlow Serving (in PHP, but the question is not PHP-specific).
The file serving/tensorflow_serving/apis/predict.proto has:
import "tensorflow/core/framework/tensor.proto";
import "tensorflow_serving/apis/model.proto";
However, in a normal setup, tensorflow and tensorflow_serving are not located in a hierarchy with a common root folder from which both imports can be resolved.
Assuming that compiling the proto files to PB files for gRPC keeps the hierarchy, this cannot work without locating tensorflow_serving under /tensorflow/. What am I missing here?
What is the best practice for compiling PB files for a gRPC client?
Another issue: if the PB files are generated, they include the imports with the same hierarchy, so the same folder structure will be forced on the client. This goes against the point of gRPC, which is isolation and separation between the entities.
I don't know anything about TensorFlow, but I'm approaching this from a just-another-protobuf-compilation point of view. At https://github.com/tensorflow/serving I see both tensorflow_serving and a tensorflow submodule, which is the root of your desired dependency (i.e. it has another tensorflow subfolder in it). So I guess that you either missed some configuration step that would have copied the folder into the right relative location, or you are running an incomplete/incorrect protoc command line, i.e. you are missing some -I <path>.
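For illustration, a protoc invocation along those lines might look like the sketch below. It assumes a checkout of the serving repo with the tensorflow submodule initialized, run from the repo root; the output directory and grpc_php_plugin are the stock gRPC PHP tooling, but the exact paths are assumptions, not a verified recipe.

# Both -I roots let the two imports in predict.proto resolve
# without moving any folders around.
protoc \
  -I . \
  -I tensorflow \
  --php_out=./generated \
  --grpc_out=./generated \
  --plugin=protoc-gen-grpc=$(which grpc_php_plugin) \
  tensorflow_serving/apis/predict.proto

Note that the import hierarchy only matters at generation time; the client consumes the generated classes, so it does not have to mirror the proto folder structure.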
I'm trying to use import in my *.proto files for Protobuf. In my app, I have a few plugins with these *.proto files, but IntelliJ with the Protocol Buffers plugin marks imports, and classes from the imported files, in red.
I found that if I add a hard-coded location to my *.proto in Preferences -> Languages & Frameworks -> Protocol Buffers, it turns green. But I have many places with *.proto files, for example one per plugin in my app, and some of these *.proto files can have the same name, like featuresDto for each plugin. How can I add a library-dependent location instead of a hard-coded one?
However, the protobuf generator itself works as expected.
[screenshot]
I didn't quite get what you mean by a library-dependent location. Could you please explain?
As for the protobuf plugin settings: there you can find all the directories you'll be able to import files from. E.g. if the protobuf file is located at /foo/bar/MyService.proto and you want to be able to import it as import "bar/MyService.proto";, you should add the /foo/ directory to the plugin settings.
In fact, those settings mean exactly the same as the directories one specifies with the --proto_path CLI argument: if any proto file cannot be resolved against one of the given proto paths, compilation will not succeed.
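As a concrete sketch of that equivalence, reusing the /foo/bar/MyService.proto example above (the output flag is illustrative, not from the question):

# Resolve imports against /foo, exactly as the IDE settings do.
protoc --proto_path=/foo --java_out=./gen bar/MyService.proto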
Also, you may find the following issues about simplifying import resolution useful: https://youtrack.jetbrains.com/issue/IDEA-283099 and https://youtrack.jetbrains.com/issue/IDEA-283097
This is kind of embarrassing given how simple and common this problem is, but I feel I've checked everything.
WSGI file is located at: /var/www/igfakes/server.wsgi
Apache is complaining that it can't import my project's module, so I decided to start up a Python shell and see if it's any different: nope.
All the proof is in the following screenshot; I'll walk you through it.
First, see that I cannot import my project.
Then I import sys and check the path.
Note /var/www in the path.
I leave Python.
I check the directory, then confirm my project is in that same directory.
My project is exactly where I'm specifying. Any idea what's going on?
I've followed a few different tutorials, all with the same instructions, like this one.
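For context, the server.wsgi those tutorials prescribe looks roughly like the sketch below. The module and application names are stand-ins, not my exact file; the sys.path line is the piece the tutorials use to make the project importable from the Apache/mod_wsgi process.

import sys
# Make the project directory visible to the mod_wsgi interpreter.
sys.path.insert(0, '/var/www/igfakes')
# Hypothetical: expose the project's WSGI application object.
from igfakes import app as application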
Python circuits is a great framework, but I am not familiar with its component-loading mechanism.
Apart from loading installed Python modules (installed through pip) from the site-packages folder, which I guess is where PYTHONPATH points, does it load from anywhere else, e.g. the current path when I invoke python app.py?
It uses safe_import, which ultimately uses __import__ from the Python builtins. This means your components have to be importable via the normal Python import paths.
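A minimal sketch of what that implies, with a hypothetical module name and path:

import sys

# __import__ resolves names against sys.path, just like a normal import,
# so a component's module must live on one of these entries.
sys.path.insert(0, "/path/to/your/components")
component_module = __import__("my_component")  # what safe_import boils down to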
I'm attempting to train the NER component in spaCy to recognize a new set of entities. Everything works just fine until I try to save and reload the model.
I'm attempting to follow the spaCy docs' recommendations from https://spacy.io/usage/training#saving-loading, so I have been saving with:
model.to_disk("save_this_model")
and then going to the command line and attempting to turn it into a package using:
python -m spacy package save_this_model saved_model_package
so I can then use
spacy.load('saved_model_package')
to pull the model back up.
However, when I attempt to use spacy package from the command line, I keep getting the error message "Can't locate model data".
I've looked in the save_this_model directory and there is a meta.json there, as well as folders for the various pipes (I've tried this both with all pipes saved and with the non-NER pipes disabled; neither works).
Does anyone know what I might be doing wrong here?
I'm pretty inexperienced, so I think it's very possible that I'm attempting to make a package incorrectly or committing some other basic error. Thank you very much for your help in advance!
The spacy package command will create an installable, loadable Python package based on your model data, stored as a single .tar.gz file which you can then pip install. If you just want to load a model you've saved out, you usually don't even need to package it: you can simply pass the path to the model directory to spacy.load. For example:
nlp = spacy.load('/path/to/save_this_model')
spacy.load can take a path to a model directory, a model package name, or the name of a shortcut link, if available.
If you're new to spaCy and just experimenting with training models, loading them from a directory is usually the simplest solution. Model packages come in handy if you want to share your model with others (because you can share it as one installable file), or if you want to integrate it into your CI workflow or test suite (because the model can be a component of your application, like any other package it depends on).
So if you do want a Python package, you'll first need to build it by running the package setup from within the directory created by spacy package:
cd saved_model_package
python setup.py sdist
You can find more details here in the docs. The above command will create a .tar.gz archive in a dist/ subdirectory, which you can then install in your environment:
pip install /path/to/en_example_model-1.0.0.tar.gz
If the model installed correctly, it should show up in the installed packages when you run pip list or pip freeze. To load it, you can call spacy.load with the package name, which is usually the language code plus the name you specified when you packaged the model. In this example, en_example_model:
nlp = spacy.load('en_example_model')
Inside the tensorflow/models/tutorials/rnn/translate folder, there are a few files, including __init__.py and BUILD.
Even without the __init__.py and BUILD files, the translate script still manages to run.
What is the purpose of __init__.py and BUILD here? Are we supposed to install or build it using these two files?
The BUILD file supports using Bazel for hermetic building and testing of the model code. In particular, a BUILD file is present in that directory to define the integration test translate_test.py and its dependencies, so that it can be run on a continuous-integration system (e.g. Jenkins).
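An illustrative fragment of what such a BUILD file contains; this is a sketch, not the actual file, and the target names and dependency wiring are assumptions:

# Hypothetical library target for model code shared by the script and the test.
py_library(
    name = "seq2seq_model",
    srcs = ["seq2seq_model.py"],
)

py_binary(
    name = "translate",
    srcs = ["translate.py"],
    deps = [":seq2seq_model"],
)

py_test(
    name = "translate_test",
    srcs = ["translate_test.py"],
    deps = [":seq2seq_model"],
)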
The __init__.py file causes Python to treat that directory as a package. See this question for a discussion of why __init__.py is often present in a Python source directory. While this file is not strictly necessary to invoke translate.py directly from that directory, it is necessary if we want to import the code from translate.py into a different module.
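For example, assuming the parent of the translate directory is on sys.path, the __init__.py is what makes this work from another module:

# Import translate.py as a module of the translate package.
from translate import translate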
(Note that when you run a Python binary through Bazel, the build system will automatically generate __init__.py files if they are missing. However, TensorFlow's repositories often have explicit __init__.py files in Python source directories so that you can run the code without invoking Bazel.)