I am running one of the oneVPL samples, hello-vpp.
I've downloaded the sample source code from the oneAPI samples repository:
https://github.com/oneapi-src/oneAPI-samples.git
My OS is Ubuntu 18.04.
After building, I tried the command below to get the output.
./hello-vpp ../content/input.i420 640 480
and I got the error below.
Could not create output file
Processed 0 frames
Are there any corrections needed in my command? What would the expected output be?
The hello-vpp sample only works with the i420 video format, and the input size must be 128x96. So the right command to run the hello-vpp sample is:
./hello-vpp ../content/input.i420 128 96
The expected output would be
Found ApiVersion: 2.2
SW session created
Processing ../content/input.i420 -> out.i420
Processed 60 frames
The output file, i.e. out.i420, will be found in the build directory (path: Libraries/oneVPL/hello-vpp/build), and its size is 640x480 by default.
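If you want to inspect the raw output, a player that understands raw i420 frames can render it; for example, assuming ffmpeg/ffplay is installed, something like:
ffplay -f rawvideo -pixel_format yuv420p -video_size 640x480 out.i420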
I am using gdal_rasterize and ogr2ogr with a goal to get a partial raster of .gpkg file.
With the first command I want to clip a smaller area out of a large map.
ogr2ogr -spat xmin ymin xmax ymax out.gpkg in.gpkg
This produces a file for which the command ogrinfo out.gpkg gives the expected output, listing the layer numbers and names.
Then trying to rasterize this new file with:
gdal_rasterize out.gpkg -burn 255 -ot Byte -ts 250 250 -l anylayer out.tif
results in ERROR 1: Cannot get layer extent, regardless of which of the layer names reported by ogrinfo is used.
Running the same command on the original in.gpkg gives no errors and produces the expected .tiff raster.
ogr2ogr --version: GDAL 2.4.2, released 2019/06/28
gdal_rasterize --version: GDAL 2.4.2, released 2019/06/28
This process should at the end be implemented with the gdal C++ API.
Are the commands as given somehow invalid, and if so, how?
Should the whole process be done differently, and if so, how?
What does ERROR 1: Cannot get layer extent mean?
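Since the goal is to eventually drive this from code (per the note above about the GDAL C++ API), here is a rough sketch of the same two steps using the GDAL Python bindings; the C++ API exposes equivalent GDALVectorTranslate/GDALRasterize functions. The paths, bounding box, and layer name are placeholders:
from osgeo import gdal

xmin, ymin, xmax, ymax = 0.0, 0.0, 1000.0, 1000.0  # placeholder extent

# Step 1: clip a spatial subset, the equivalent of `ogr2ogr -spat ...`
gdal.VectorTranslate("out.gpkg", "in.gpkg", spatFilter=(xmin, ymin, xmax, ymax))

# Step 2: rasterize one layer of the clipped file, the equivalent of gdal_rasterize
gdal.Rasterize("out.tif", "out.gpkg", layers=["anylayer"], burnValues=[255],
               outputType=gdal.GDT_Byte, width=250, height=250)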
I am trying to use spaCy's 'pre-train' feature for a NER task; here is what I have tried so far (I am still working through it).
Step 1: I started by initializing the model with 'en_core_web_lg', then saved this model to disk and tested its NER capability on a few lines to see whether it recognizes the tags in those test lines. (I made a note of the ignored tags.)
Step 2: Next I created a .jsonl file with new data to train on (about 20 new lines; I wanted to see whether, given new data around an entity (the ignored tags found earlier), the model would correctly identify those tags after transfer learning). Using this .jsonl file and the model I saved earlier, I ran the 'spacy pre-train' command to train, which created a token2vec .bin file for me (model999.bin).
Step 3: Next I created a function that takes the location of the model saved in Step 1 and the location of the token2vec file (model999.bin obtained in Step 2). Inside the function it loads the model, creates/gets the pipe, disables the rest of the pipes, and uses (pipe_name).model.tok2vec.from_bytes(file_.read()) to read model999.bin and broadcast the learned vectors into the base model.
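For reference, the function in Step 3 does roughly the following (the paths are placeholders and 'ner' is the pipe I load the weights into):
import spacy

nlp = spacy.load("path/to/saved_base_model")   # model saved in Step 1
ner = nlp.get_pipe("ner")

# load the pretrained token-to-vector weights into the NER pipe
with open("model_saves/model999.bin", "rb") as file_:
    ner.model.tok2vec.from_bytes(file_.read())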
But when I run this function, I get this error:
ValueError: could not broadcast input array from shape (96,3,384) into shape (96,3,480)
(I have uploaded the entire notebook here: https://github.com/pratikdk/ner_test/blob/master/base_model_contextual_TF.ipynb).
In order to pre-train, I used this command:
python -m spacy pre-train ub.jsonl model_saves w2s
Here are the 20 lines I tried training on top of the base model: https://github.com/pratikdk/ner_test/blob/master/ub.jsonl
What exactly am I doing wrong here? Can you also point out the fix? I am sure many others would benefit from insight on this.
Environment
Operating System: CentOS
Python Version Used: 3.7.3
spaCy Version Used: 2.1.3
Environment Information: Anaconda Jupyter Lab
So I was able to fix this; the developer answered my question on GitHub.
Here is the answer:
https://github.com/explosion/spaCy/issues/3616
I am going through the training tutorial on retraining Inception's final layer, having installed TensorFlow for Ubuntu with regular CPU support. I successfully made the flower examples work; however, after switching to a new set of categories with ten sub-folders, I cannot make Inception produce ten scores for each input image rather than the default five. My current command line to run a test image looks like this, working with labels numbered 0-9.
bazel build tensorflow/examples/label_image:label_image && \
bazel-bin/tensorflow/examples/label_image/label_image \
--graph=/tmp/output_graph.pb --labels=/tmp/output_labels.txt \
--output_layer=final_result --input_layer=Mul \
--image=$HOME/Input/Example.jpg
Which produces as a result
5 (4): 0.642959
3 (2): 0.243444
9 (8): 0.0513504
4 (5): 0.0231318
6 (7): 0.0180509
However, I cannot find anything in the scripts Inception runs that would reconfigure how many output scores are produced, so that all ten of my categories get scores rather than just five. How do I change this?
I tried with 8 categories and was able to get results for all of them.
If your code has the line below
top_k = predictions[0].argsort()[-5:][::-1]
change it to
top_k = predictions[0].argsort()[-len(predictions[0]):][::-1]
If your code contains predictions = np.squeeze(predictions), then use predictions instead of predictions[0].
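For context, the relevant part of label_image.py looks roughly like this (the variable names are taken from the retraining tutorial and may differ in your copy; image_data and labels are assumed to be loaded earlier in the script):
import numpy as np
import tensorflow as tf

with tf.Session() as sess:
    # tensor produced by retrain.py and the raw JPEG bytes of the test image
    softmax_tensor = sess.graph.get_tensor_by_name('final_result:0')
    predictions = sess.run(softmax_tensor, {'DecodeJpeg/contents:0': image_data})
    predictions = np.squeeze(predictions)

    # print every category instead of only the top five
    top_k = predictions.argsort()[-len(predictions):][::-1]
    for node_id in top_k:
        print('%s (score = %.5f)' % (labels[node_id], predictions[node_id]))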
I ran this using the following command instead of bazel, and I found it easier.
python /path_to_file/label_image.py /path_to_image/image.jpeg
First, make sure that the graph is created after you run retrain.py and that it is in the correct location (the default is inside /tmp/).
I've tried searching the internet for input on this one, but without success.
I am using libSVM (https://www.csie.ntu.edu.tw/~cjlin/libsvm/) and I've encountered this while training the SVM with rbf kernel.
If a feature contains very small numbers, like feature 15 in the following
0 1:4.25606e+07 2:4.2179e+07 3:5.1059e+07 4:7.72388e+06 5:7.72037e+06 6:8.87669e+06 7:4.40263e-06 8:0.0282494 9:819 10:2.34513e-05 11:21.5385 12:95.8974 13:179.117 14:9 15:6.91877e-310
libSVM will fail to read the file, with the error Wrong input at line <lineID>.
After some testing, I was able to confirm that changing such a small number to 0 appears to fix the error; i.e., this line is correctly read:
0 1:4.17077e+07 2:4.12838e+07 3:5.04597e+07 4:7.76011e+06 5:7.74881e+06 6:8.91813e+06 7:3.97472e-06 8:0.0284308 9:936 10:2.46506e-05 11:22.8714 12:100.969 13:186.641 14:17 15:0
Can anybody help me figure out why this is happening? My file contains a lot of numbers around that order of magnitude.
I am calling the SVM via terminal on Ubuntu like:
<path to>/svm-train -s 0 -t 2 -g 0.001 -c 100000 <path to features file> <path for output model file>
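For reference, a minimal Python sketch of the manual workaround described above (zeroing out such tiny feature values before writing the file); the threshold is an arbitrary assumption on my part:
def clean_libsvm_line(line, eps=1e-300):
    # zero out feature values too small to be read reliably
    label, *features = line.split()
    cleaned = [label]
    for feat in features:
        idx, val = feat.split(":")
        x = float(val)
        cleaned.append("%s:%g" % (idx, 0.0 if abs(x) < eps else x))
    return " ".join(cleaned)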
I was running a very long training job (reinforcement learning with 20M steps) and writing a summary every 10k steps. Between step 4M and 6M, I saw 2 peaks in my TensorBoard scalar chart for game score; then I let it run and went to sleep. In the morning, it was at about step 12M, but the peaks between step 4M and 6M that I had seen earlier had disappeared from the chart. I tried to zoom in and found that TensorBoard had (randomly?) skipped some of the data points. I also tried to export the data, but some data points, including the peaks, are missing from the exported .csv as well.
I looked for answers and found this on the TensorFlow GitHub page:
TensorBoard uses reservoir sampling to downsample your data so that it can be loaded into RAM. You can modify the number of elements it will keep per tag in tensorboard/backend/server.py.
Has anyone ever modified this server.py file? Where can I find the file, and if I installed TensorFlow from source, do I have to recompile it after modifying the file?
You don't have to change the source code for this; there is a flag called --samples_per_plugin.
Quoting from the help output:
--samples_per_plugin: An optional comma separated list of plugin_name=num_samples pairs to explicitly
specify how many samples to keep per tag for that plugin. For unspecified plugins, TensorBoard
randomly downsamples logged summaries to reasonable values to prevent out-of-memory errors for long
running jobs. This flag allows fine control over that downsampling. Note that 0 means keep all
samples of that type. For instance, "scalars=500,images=0" keeps 500 scalars and all images. Most
users should not need to set this flag.
(default: '')
So if you want to have a slider of 100 images, use:
tensorboard --samples_per_plugin images=100
The comment is out of date; it can actually be modified in tensorboard/backend/application.py, in the "Default Size Guidance" dictionary. By default, it stores 1000 scalars. You can increase that limit arbitrarily, or set it to 0 to store every scalar.
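For illustration, the dictionary looks roughly like this (the exact contents vary between TensorBoard versions); setting the SCALARS entry to 0 keeps every scalar point:
# inside tensorboard/backend/application.py (event_accumulator is already imported there)
DEFAULT_SIZE_GUIDANCE = {
    event_accumulator.COMPRESSED_HISTOGRAMS: 500,
    event_accumulator.IMAGES: 10,
    event_accumulator.AUDIO: 10,
    event_accumulator.SCALARS: 0,    # was 1000; 0 means keep all scalars
    event_accumulator.HISTOGRAMS: 50,
}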
You don't need to recompile TensorBoard, or even download it from source. You could just modify this file in your TensorBoard yourself.
If you installed TensorFlow using pip in a virtualenv (Ubuntu, Mac), then within your virtualenv directory the path to application.py should be something like lib/python2.7/site-packages/tensorflow/tensorboard/backend. If you modify that file, you should get the new setting in your TensorBoard (when you run tensorboard in that virtualenv). If you're like me, you'll add a print statement too, so you can be sure you're running the modified code :)