I am trying to build a TFServing Docker image that is less than 1GB in size. If you follow the online instructions you get an image that is about 16GB. You can, however, decrease the size to 3.5GB if you only build the model server:
bazel build //tensorflow_serving/model_servers:tensorflow_model_server
Half the footprint comes from the dynamic library build products in core/kernels:
root#5c275ce482e3:/# du -h -d 1 bazel-out/local-fastbuild/bin/external/org_tensorflow/tensorflow/core/kernels/
780M bazel-out/local-fastbuild/bin/external/org_tensorflow/tensorflow/core/kernels/_objs
1.8G bazel-out/local-fastbuild/bin/external/org_tensorflow/tensorflow/core/kernels/
I think this can be made much smaller, since the TensorFlow Java API links to a dylib that is only 90MB (GPU) / 30MB (CPU). Looking at the Bazel BUILD files, it seems both the JNI/dylib and model_servers targets depend on all_kernels, so I don't understand why the dylib for JNI is so small. How can I get the TFServing build to be of a comparable size?
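For reference, this is how I have been inspecting the dependency chain; it is only a sketch, and the all_kernels label is my guess at the exact target name from reading the BUILD files:

# print one dependency path from the model server to the kernels target
bazel query "somepath(//tensorflow_serving/model_servers:tensorflow_model_server, @org_tensorflow//tensorflow/core/kernels:all_kernels)"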
Related
I want to do the initial steps in tfjs using Node.js. At the moment, for tests, I can only use a computer with the following configuration:
Windows 7 SP1
8 GB RAM
E7500 (no AVX)
GeForce 750Ti
node v12.19.0
When using tfjs-node, I get the error:
return process.dlopen(module, path.toNamespacedPath(filename));
As far as I understand, this is because the processor is very old and has no AVX support.
Can I somehow rebuild tfjs-node to work on my processor? Ideally I would build tfjs-node-gpu. If this is possible, what should I do?
I've come across builds from fo40225 (https://github.com/fo40225), but they are for Python.
I solved the problem.
First I tried upgrading Windows to Windows 10; it didn't help.
So I decided to rebuild tensorflow.dll. After many attempts, I came up with this setup:
Bazel 3.1
Python 3.8
NumPy installed globally
VS BuildTools 2019
TensorFlow branch 2.3, compiled with bazel build -c opt //tensorflow/tools/lib_package:libtensorflow
After that I copied the DLL into node_modules\@tensorflow\tfjs-node\lib\napi-v6
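Putting the steps together, the sequence looked roughly like this; treat it as a sketch, since the exact location of the built DLL under bazel-bin and the project path (C:\myapp here) depend on your setup:

:: from the TensorFlow 2.3 source checkout, in a VS BuildTools 2019 environment
python ./configure.py
bazel build -c opt //tensorflow/tools/lib_package:libtensorflow
:: copy the freshly built DLL over the one tfjs-node shipped with
:: (the DLL path under bazel-bin may differ between builds)
copy bazel-bin\tensorflow\tensorflow.dll C:\myapp\node_modules\@tensorflow\tfjs-node\lib\napi-v6\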
I have an issue when I'm trying to deploy my Docker image of Rasa to Heroku.
Here is a screenshot of the terminal output: [terminal screenshot]
How can I avoid this large size?
I used requirements.txt to install Rasa; here is a screenshot: [requirements.txt screenshot]
Can you help me please?
One option is to increase the Dyno size (this is not free).
Alternatively, you can build a smaller Docker image, for example by not using the TensorFlow or spaCy models (which are pretty large).
You typically only need those if you want, for example, to use their NER models (which extract names, locations, etc.).
This is an example of how to build a Rasa instance which can fit in the free tier:
# from the Rasa base image
FROM rasa/rasa:1.8.0
# copy the Rasa config and the Rasa-generated model
COPY . /app
# script that starts Rasa core (a sketch of it is given below)
COPY startup.sh /app/scripts/startup.sh
# hand control to the startup script (assumes it launches rasa run)
ENTRYPOINT ["/app/scripts/startup.sh"]
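The answer does not include startup.sh itself, so here is a minimal sketch of what such a script could look like; the model path and the use of Heroku's PORT environment variable are my assumptions:

#!/bin/sh
# startup.sh: hypothetical minimal launcher for the Rasa server.
# Heroku injects the port to bind to via the PORT environment variable.
exec rasa run --model /app/models --enable-api --port "${PORT:-5005}"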
I know how to modify my image, reboot, and re-run it, but that would make my experiments very slow, since boot takes a few minutes.
Is there a way to quickly switch:
command line options
the executable
that is being run after boot?
This is not trivial because the Linux kernel knows about:
the state of the root filesystem
the state of memory, and therefore of kernel CLI options that could be used to modify init
so I can't just switch those after a checkpoint.
This question is inspired by: https://www.mail-archive.com/gem5-users@gem5.org/msg16959.html
Here is a fully automated setup that can help you do it.
The basic workflow is as follows:
run your benchmark from the init executable that gets passed to the Linux kernel
https://unix.stackexchange.com/questions/122717/how-to-create-a-custom-linux-distro-that-runs-just-one-program-and-nothing-else/238579#238579
https://unix.stackexchange.com/questions/174062/can-the-init-process-be-a-shell-script-in-linux/395375#395375
to run a single benchmark with different parameters without rebooting, do the following in your init script:
m5 checkpoint
m5 readfile | sh
and set the contents that m5 readfile will return by writing them to a file on the host filesystem before restoring the checkpoint:
echo 'm5 resetstats && ./run-benchmark && m5 dumpstats' > path/to/script
build/ARM/gem5.opt configs/example/fs.py --script path/to/script
This is what configs/boot/hack_back_ckpt.rcS does, but I think that script is overly complicated; a minimal version is sketched below.
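For concreteness, a minimal init script along those lines might look like this; treat it as a sketch, and note that it assumes the m5 binary is installed inside the guest image:

#!/bin/sh
# runs once on first boot: take the checkpoint here
m5 checkpoint
# runs again every time the checkpoint is restored:
# fetch whatever script the host provided via --script and execute it
m5 readfile | sh
# halt the simulation when the benchmark script is done
m5 exit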
to modify the executable without having to reboot, attach a second disk image and mount it after the checkpoint is restored:
How to attach multiple disk images in a simulation with gem5 fs.py?
Another possibility would be 9P, but it is not currently working: http://gem5.org/WA-gem5
to count only the benchmark's instructions, do:
m5 resetstats
./run-benchmark
m5 dumpstats
If that is not precise enough, modify the source of your benchmark with m5ops magic instructions that do resetstats and dumpstats from within the benchmark as shown at: How to count the number of CPU clock cycles between the start and end of a benchmark in gem5?
to make the first boot faster, you can boot with a simple CPU model like the default AtomicSimpleCPU and then switch to a more detailed, slower model after the checkpoint is restored: How to switch CPU models in gem5 after restoring a checkpoint and then observe the difference?
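As a concrete illustration of that last point, restoring with a detailed CPU could look roughly like this; the flag names (-r, --restore-with-cpu, --cpu-type, --caches) are from the fs.py-era configs, so check them against your gem5 version:

# boot and take the checkpoint with the fast default CPU
build/ARM/gem5.opt configs/example/fs.py --script path/to/script
# restore checkpoint 1, switching to a detailed out-of-order CPU
build/ARM/gem5.opt configs/example/fs.py -r 1 \
    --restore-with-cpu=AtomicSimpleCPU --cpu-type=DerivO3CPU --caches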
With QEMU, I can either use -initrd "${images_dir}/rootfs.cpio" for the initrd, or pass the initramfs image directly to -kernel Image.
But if I try the initramfs image with gem5 fs.py --kernel Image it fails with:
fatal: Could not load kernel file
with the exact same initramfs kernel image that QEMU was able to consume.
And I don't see an analogue to -initrd.
The only method that I got to work was to pass an ext2 disk image to --disk-image with the raw vmlinux.
https://www.mail-archive.com/gem5-users@gem5.org/msg15198.html
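For concreteness, the invocation that did work looked roughly like this (the paths are placeholders):

# raw ELF vmlinux plus an ext2 root filesystem image
build/ARM/gem5.opt configs/example/fs.py \
    --kernel path/to/vmlinux --disk-image path/to/rootfs.ext2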
initrd appears to be unimplemented on arm and x86 at least, since gem5 must know how to load it and inform the kernel about its location, and grepping for initrd only shows some ARM hits under:
src/arch/arm/linux/atag.hh
but they are commented out.
Communicating the initrd location to the kernel now appears to be doable simply via the DTB chosen node's linux,initrd-start and linux,initrd-end properties: https://www.kernel.org/doc/Documentation/devicetree/bindings/chosen.txt. So it might be very easy to implement, using gem5's existing DTB auto-generation and reusing the infrastructure to load arbitrary bytes to a memory location: How to preload memory with given raw bytes in gem5 from the command line in addition to the main ELF executable?
Initramfs doesn't work because gem5 can only boot from vmlinux, which is the raw ELF file, while the initramfs image only gets attached by the kernel build to a more final image type like Image or bzImage, which QEMU can boot from; see also: https://unix.stackexchange.com/questions/5518/what-is-the-difference-between-the-following-kernel-makefile-terms-vmlinux-vml/482978#482978
Edit: the following is not needed anymore after the patch mentioned at: How to attach multiple disk images in a simulation with gem5 fs.py? To do this test, I also had to pass a dummy disk image as of gem5 7fa4c946386e7207ad5859e8ade0bbfc14000d91, since the scripts don't handle a missing --disk-image well. You can just dump 512 zero bytes and use them:
dd if=/dev/zero of=dummy.iso bs=512 count=1
I tried to retrain (new images, new classes) on top of the pretrained Inception model, so I followed the instructions of the Inception README:
https://github.com/tensorflow/models/tree/master/inception#how-to-construct-a-new-dataset-for-retraining
I successfully built and ran build_image_data using bazel, as described in the tutorial. Afterwards I successfully built inception_train using bazel:
~/tensorflowmodels/models/inception# bazel build inception/inception_train
INFO: Found 1 target...
Target //inception:inception_train up-to-date (nothing to build)
INFO: Elapsed time: 0.073s, Critical Path: 0.00s
However, running bazel-bin/inception/inception_train I always get the following:
~/tensorflowmodels/models/inception# bazel-bin/inception/inception_train --train_dir="/" --validation_dir="/" --data_dir="/images_jpg/" --pretrained_model_checkpoint_path="/tensorflowmodels/models/inception/inception-v3/" --fine_tune=True --initial_learning_rate=0.001 --input_queue_memory_factor=1 --num_gpus=1
-bash: bazel-bin/inception/inception_train: No such file or directory
Naturally, I would say it's almost certainly a typo. So then I tried to run inception_train.py with Python. I had to change some import locations, and it finally ran with the parameters. However, the script stops without any error messages after the initialization of the CUDA drivers.
Any help on how to solve this (or perform fine tuning / retraining with inception) would be very much appreciated.
TensorFlow version: 0.9rc0
CPU: Xeon 5, 24 cores
GPU: GRID K2, 8 GB
OS: Ubuntu 14.04
BTW, I already posted this as a GitHub issue (which was closed, since it would be more a case for Stack Overflow).