Is there a way to display the average of multiple different runs in TensorBoard?
I can only see them on the same graph (by pointing TensorBoard at the paths of the different runs), but I want to see their average on the graph.
As @dga mentioned, this is not implemented yet. Here is some code that uses EventAccumulator to combine scalar TensorFlow summary values. It can be extended to accommodate the other summary types.
import os
from collections import defaultdict

import numpy as np
import tensorflow as tf
from tensorboard.backend.event_processing.event_accumulator import EventAccumulator


def tabulate_events(dpath):
    # Load every run directory under dpath and collect the scalar values per tag.
    summary_iterators = [EventAccumulator(os.path.join(dpath, dname)).Reload()
                         for dname in os.listdir(dpath)]

    tags = summary_iterators[0].Tags()['scalars']
    for it in summary_iterators:
        assert it.Tags()['scalars'] == tags

    out = defaultdict(list)
    for tag in tags:
        # Zip the runs step by step; every run must have logged the same steps.
        for events in zip(*[acc.Scalars(tag) for acc in summary_iterators]):
            assert len(set(e.step for e in events)) == 1
            out[tag].append([e.value for e in events])

    return out


def write_combined_events(dpath, d_combined, dname='combined'):
    # Write the per-step mean across runs into a new run directory.
    # tf.summary.FileWriter is the TF 1.x API.
    fpath = os.path.join(dpath, dname)
    writer = tf.summary.FileWriter(fpath)

    tags, values = zip(*d_combined.items())
    timestep_mean = np.array(values).mean(axis=-1)

    for tag, means in zip(tags, timestep_mean):
        for i, mean in enumerate(means):
            summary = tf.Summary(value=[tf.Summary.Value(tag=tag, simple_value=mean)])
            writer.add_summary(summary, global_step=i)
        writer.flush()


dpath = '/path/to/root/directory'
d = tabulate_events(dpath)
write_combined_events(dpath, d)
This solution assumes a directory structure like the following:
dpath
├── 1
│ └── events.out.tfevents.1518552132.Alexs-MacBook-Pro-2.local
├── 11
│ └── events.out.tfevents.1518552180.Alexs-MacBook-Pro-2.local
├── 21
│ └── events.out.tfevents.1518552224.Alexs-MacBook-Pro-2.local
├── 31
│ └── events.out.tfevents.1518552264.Alexs-MacBook-Pro-2.local
└── 41
└── events.out.tfevents.1518552304.Alexs-MacBook-Pro-2.local
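If you are on TensorFlow 2.x, where tf.summary.FileWriter no longer exists, a minimal sketch of the writer using the TF 2 summary API could look like this (write_combined_events_tf2 is a hypothetical name; it assumes the same d_combined dict produced by tabulate_events above):
import os
import numpy as np
import tensorflow as tf

def write_combined_events_tf2(dpath, d_combined, dname='combined'):
    # Hypothetical TF 2.x variant of write_combined_events above.
    fpath = os.path.join(dpath, dname)
    writer = tf.summary.create_file_writer(fpath)

    tags, values = zip(*d_combined.items())
    timestep_mean = np.array(values).mean(axis=-1)  # average across runs

    with writer.as_default():
        for tag, means in zip(tags, timestep_mean):
            for i, mean in enumerate(means):
                tf.summary.scalar(tag, float(mean), step=i)
    writer.flush()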
Please follow issue 376 to see progress on this. It's an active feature request with some progress in the last month, but as of now, there's not a way to do what you want. Yet.
Since there is still no built-in functionality to do this, I released a tool for it:
https://github.com/Spenhouet/tensorboard-aggregator
This tool can aggregate multiple TensorBoard runs by their max, min, mean, median, and standard deviation. The aggregates are saved either as new TensorBoard summaries or as .csv files.
I created TensorBoard Reducer to do this with PyTorch.
pip install tensorboard-reducer
tb-reducer runs/of-your-model* -o output-dir -r mean,std,min,max
The aggregation results can be saved to disk either as new TensorBoard logs or CSV / JSON / Excel files.
How to assign more than one box to a job?
Boxes: B1, B2
Jobs: V, W, X, Y, Z
B1 has V, W, and Y running in the same sequence.
B2 has W, X, and Z running in the same sequence.
So, how can W be put in both B1 and B2?
Assigning a single job to multiple boxes is NOT allowed.
As an alternative:
Create two jobs for W, such as JOB_W_1 and JOB_W_2, with different names but the same command, and place them in separate boxes.
In case you don't want both jobs to run simultaneously, set the condition for JOB_W_1 to notrunning(JOB_W_2) or done(JOB_W_2).
The job tree would look like this:
├── BOX_1
│ ├── JOB_V
│ ├── JOB_W_1
│ └── JOB_X
└── BOX_2
├── JOB_W_2
├── JOB_Y
└── JOB_Z
Downstream, for any dependency, use the success of:
both jobs JOB_W_1 and JOB_W_2, or
both boxes BOX_1 and BOX_2.
I have a structure like the following one:
/src
    __init__.py
    module1.py
    module2.py
/tests
    __init__.py
    test_module1.py
    test_module2.py
/notebooks
    __init__.py
    exploring.ipynb
main.py
I'd like to use the notebook 'exploring' to do some data exploration, and to do so I'd need to perform relative imports of module1 and module2. But if I try to run from ..src.module1 import funct1, I receive an ImportError: attempted relative import with no known parent package, which I understand is expected because I'm running the notebook as if it was a script and not as a module.
So as a workaround I have been mainly pulling the notebook outside its folder to the main.py level every time I need to use it, and then from src.module1 import funct1 works.
I know there are already tons of threads on relative imports, but so far I couldn't find a simpler way of making this work without having to move the notebook every time. Is there any way to perform this relative import, given that the notebook, when called, is running "as a script"?
Scripts cannot do relative imports. Have you considered something like:
import os
import sys

if __name__ == "__main__":
    sys.path.insert(0, os.path.abspath(os.path.join(os.getcwd(), '..')))
    from src.module1 import funct1
else:
    from ..src.module1 import funct1
Or, using exceptions:
import os
import sys

try:
    from ..src.module1 import funct1
except ImportError:
    sys.path.insert(0, os.path.abspath(os.path.join(os.getcwd(), '..')))
    from src.module1 import funct1
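Another option, not from the answer above but a common notebook pattern: since the notebook's working directory is the /notebooks folder, you can prepend the project root to sys.path once at the top of the notebook and then use absolute imports throughout. A minimal sketch, assuming the layout shown in the question:
# Hypothetical notebook cell; assumes exploring.ipynb lives in /notebooks,
# one level below the project root that contains /src.
import sys
from pathlib import Path

project_root = Path.cwd().parent
if str(project_root) not in sys.path:
    sys.path.insert(0, str(project_root))

from src.module1 import funct1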
My TensorFlow version is 1.15, and my directory tree is depicted below.
root
|
|---scene1
| |
| |--img1.npy
| |--img2.npy
| |--cam.txt
| |--poses.txt
|
|---scene2
| |
| |--img1.npy
| |--img2.npy
| |--img3.npy
| |--cam.txt
| |--poses.txt
Each scene folder contains a different number of images (in .npy format), but exactly one cam.txt and one poses.txt. I have tried using numpy.genfromtxt and numpy.load to read the files in each scene folder into tensors, then using ds = tf.data.Dataset.from_tensors to create a dataset for each scene, and finally using ds.concatenate to concatenate these datasets. This method works, but it wastes a lot of time when the number of scene folders grows huge. Is there a better way to handle this?
I recently ran into a similar problem.
When dealing with very large datasets, Python generators are a good option:
[Generators] are written like regular functions but use the yield statement whenever they want to return data. Each time next() is called on it, the generator resumes where it left off (it remembers all the data values and which statement was last executed).
TensorFlow's Dataset class supports them via the static from_generator method (also available in TF 1.15):
import pathlib

import numpy as np
import tensorflow as tf


def scene_generator():
    base_dir = pathlib.Path('path/to/root/')
    # Iterate over the scene folders on the Python side; TensorFlow only sees
    # the arrays that the generator yields.
    for scene_dir in sorted(p for p in base_dir.iterdir() if p.is_dir()):
        data = []
        for npy_file in sorted(scene_dir.glob('*.npy')):
            image = np.load(npy_file)  # each img*.npy holds one image array
            data.append(image)
        cam = np.genfromtxt(scene_dir / 'cam.txt')
        poses = np.genfromtxt(scene_dir / 'poses.txt')
        yield data, [cam, poses]


# Adjust output types/shapes to match what your generator actually yields.
types = (tf.float32, tf.float32)
shapes = (tf.TensorShape([None]), tf.TensorShape([2, None]))
train_data = tf.data.Dataset.from_generator(scene_generator,
                                            output_types=types,
                                            output_shapes=shapes)
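For completeness, a small usage sketch (my addition, not part of the answer above) that consumes the dataset in TF 1.15 graph mode with a one-shot iterator:
# Hypothetical consumption of train_data; one element corresponds to one scene.
iterator = train_data.make_one_shot_iterator()
next_scene = iterator.get_next()

with tf.Session() as sess:
    images, cam_and_poses = sess.run(next_scene)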
I'm about to do TensorFlow Serving.
The .pb file and the variables folder are created, but no files were created under the variables folder. It should look like this:
└── variables
├── variables.data-00000-of-00001
└── variables.index
After further experimentation, I found that the variable files are only created when the output is a tf.Variable.
For example:
1) z = tf.Variable(3, dtype=tf.float32)
2) z = tf.constant(3, dtype=tf.float32)
With 1) the files are created, but with 2) they are not.
Here z is the output variable:
signature_def_map = {
    "serving_default": tf.saved_model.signature_def_utils.predict_signature_def(
        inputs={"egg": x, "bacon": y},
        outputs={"spam": z})
}
Is my finding correct?
The above is a test result from a simple example.
This is what I really want to do:
sIdSorted = tf.gather(sId, indices[::-1])[0:5]
sess=tf.Session()
print sess.run(sIdSorted,feed_dict={userLat:37.12,userLon:127.2})
The print outputs the following:
['s7' 's1' 's2' 's3' 's4']
However, with this approach, nothing appears in the variables folder.
So I tried to make the output a tf.Variable:
sIdSorted = tf.Variable(tf.gather(sId, indices[::-1])[0:5])
but this raises the following error:
initial_value must have a shape specified: Tensor("strided_slice_1:0", dtype=string)
So I tried it as follows:
sIdSorted = tf.Variable(tf.constant(tf.gather(sId, indices[::-1])[0:5],shape=[5]))
but this raises the following error:
List of Tensors when single Tensor expected
I need your help. Thank you for reading.
TensorFlow version: 1.3.0, Python 2.x
That is correct: only tf.Variables result in variable files being exported. Those files contain the actual values of the variables. The graph structure itself is stored in the saved_model.pb. That's where your gather (and any other ops) are. You should be able to serve the model.
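To illustrate (this sketch is my own, not part of the answer above; the path /tmp/example_model and the tensor names are placeholders): a TF 1.x export in which one output depends on a tf.Variable and another is a pure constant. Only the former causes anything to be written under variables/.
import tensorflow as tf

x = tf.placeholder(tf.float32, name="egg")
w = tf.Variable(2.0, name="weight")            # checkpointed into variables/
z_var = tf.multiply(x, w, name="spam_var")     # depends on a variable
z_const = tf.constant(3.0, name="spam_const")  # lives only in saved_model.pb

builder = tf.saved_model.builder.SavedModelBuilder("/tmp/example_model")
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    builder.add_meta_graph_and_variables(
        sess, [tf.saved_model.tag_constants.SERVING],
        signature_def_map={
            "serving_default":
                tf.saved_model.signature_def_utils.predict_signature_def(
                    inputs={"egg": x},
                    outputs={"spam": z_var, "spam_const": z_const})
        })
builder.save()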
I'm wondering how I can perform the equivalent of git status with dulwich.
After adding/changing/renaming some files and staging them for commit, this is what I've tried:
from dulwich.repo import Repo
from dulwich.index import changes_from_tree
r = Repo('my-git-repo')
index = r.open_index()
changes = index.changes_from_tree(r.object_store, r['HEAD'].tree)
Outputs the following:
>>> list(changes)
(('Makefile', None), (33188, None), ('9b20...', None))
(('test/README.txt', 'test/README.txt'), (33188, 33188), ('484b...', '4f89...'))
((None, 'Makefile.mk'), (None, 33188), (None, '9b20...'))
((None, 'TEST.txt'), (None, 33188), (None, '2a02...'))
But this output requires further processing to detect that:
I modified README.txt.
I renamed Makefile to Makefile.mk.
I added TEST.txt to the repository.
The functions in dulwich.diff_tree provide a much nicer interface to tree changes... is this not possible before actually committing?
You should be able to use dulwich.diff_tree.tree_changes to detect the changes between two trees.
One of the requirements for this is that you add the relevant tree objects to the object store - you can use dulwich.index.commit_index for this.
For completeness, a working sample:
import dulwich.objects
from dulwich.repo import Repo
from dulwich.diff_tree import tree_changes

repo = Repo("./")
index = repo.open_index()

try:
    # repo.head() returns the SHA of the HEAD commit; look it up to get its tree.
    head_tree = repo[repo.head()].tree
except KeyError:  # no HEAD yet (empty repository)
    empty_tree = dulwich.objects.Tree()
    repo.object_store.add_object(empty_tree)
    head_tree = empty_tree.id

# Commit the index to the object store so both sides of the diff are trees.
index_tree = index.commit(repo.object_store)

for change in tree_changes(repo.object_store, head_tree, index_tree):
    print("%s: %s" % (change.type, change.new.path))