Distributed TensorFlow Errors

When running a distributed TensorFlow (v0.9.0rc0) setup, I start up 3 parameter servers and then 6 workers. The parameter servers seem to be fine, giving the message Started server with target: grpc://localhost:2222, but the workers give other errors (below) that I have questions about.
It seems to me that sometimes the computers aren't able to communicate with each other, hence the socket error: connection refused messages. It also seems that the workers aren't able to find the parameter servers when initializing their variables, and give the Cannot assign a device error.
Can anyone help me understand what these errors individually mean, how big a deal each one is, and perhaps give me pointers on how to fix them if needed?
Specifically:
Why am I getting socket errors?
Why are there Master init: Unavailable issues / what do they mean?
How can I ensure that the devices requested are available?
Does this look like something I should post to the issues page of tensorflow's github account?
Notes on setup:
All computers report TensorFlow version 0.9.0rc0 (python -c "import tensorflow as tf; print(tf.__version__);"),
although a few might have been installed from source instead of the pip packages, if that matters.
All computers are on the same 1Gb Ethernet switch.
Hardware is mostly the same, with some workers running dual GPUs.
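For reference, each process is brought up with the usual ClusterSpec/Server pair, roughly like this (addresses are placeholders, not my real hosts):
import tensorflow as tf

# Same cluster definition on every machine; addresses below are placeholders.
cluster = tf.train.ClusterSpec({
    "ps": ["192.168.0.1:2222", "192.168.0.2:2222", "192.168.0.3:2222"],
    "worker": ["192.168.0.11:2222", "192.168.0.12:2222", "192.168.0.13:2222",
               "192.168.0.14:2222", "192.168.0.15:2222", "192.168.0.16:2222"],
})
# job_name and task_index differ per process; this would be ps task 0.
server = tf.train.Server(cluster, job_name="ps", task_index=0)
server.join()  # ps tasks block here; workers go on to build the graph and a Supervisor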
All of them give this error (IP addresses changed):
E0719 12:06:17.711635677 2543 tcp_client_posix.c:173]
failed to connect to 'ipv4:192.168.xx.xx:2222': socket error: connection refused
But all of the non-chief workers also give:
E tensorflow/core/distributed_runtime/master.cc:202] Master init: Unavailable:
Additionally, some of the non-chief workers crash, giving this error:
Traceback (most recent call last):
File "main.py", line 219, in <module>
r.main()
File "main.py", line 119, in main
with sv.prepare_or_wait_for_session(server.target, config=tf.ConfigProto(gpu_options=gpu_options)) as sess:
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/training/supervisor.py", line 691, in prepare_or_wait_for_sessionn max_wait_secs=max_wait_secs)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/training/session_manager.py", line 282, in wait_for_session
sess.run([self._local_init_op])
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 372, in run
run_metadata_ptr)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 636, in _run
feed_dict_string, options, run_metadata)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 708, in _do_run
target_list, options, run_metadata)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 728, in _do_call
raise type(e)(node_def, op, message)
tensorflow.python.framework.errors.InvalidArgumentError: Cannot assign a device to node 'save/restore_slice_23':
Could not satisfy explicit device specification '/job:ps/task:3/device:CPU:0'
because no devices matching that specification are registered in this process; available devices:
/job:ps/replica:0/task:0/cpu:0,
/job:ps/replica:0/task:1/cpu:0,
/job:ps/replica:0/task:2/cpu:0,
/job:ps/replica:0/task:4/cpu:0,
/job:worker/replica:0/task:0/cpu:0,
/job:worker/replica:0/task:0/gpu:0,
/job:worker/replica:0/task:1/cpu:0,
/job:worker/replica:0/task:1/gpu:0,
/job:worker/replica:0/task:2/cpu:0,
/job:worker/replica:0/task:2/gpu:0
[[Node: save/restore_slice_23 = RestoreSlice[dt=DT_FLOAT, preferred_shard=-1, _device="/job:ps/task:3/device:CPU:0"](save/Const, save/restore_slice_23/tensor_name, save/restore_slice_23/shape_and_slice)]]
Caused by op u'save/restore_slice_23', defined at:
File "main.py", line 219, in <module>
r.main()
File "main.py", line 101, in main
saver = tf.train.Saver()
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/training/saver.py", line 845, in __init__
restore_sequentially=restore_sequentially)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/training/saver.py", line 515, in build
filename_tensor, vars_to_save, restore_sequentially, reshape)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/training/saver.py", line 271, in _AddRestoreOps
values = self.restore_op(filename_tensor, vs, preferred_shard)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/training/saver.py", line 186, in restore_op
preferred_shard=preferred_shard)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/io_ops.py", line 202, in _restore_slice
preferred_shard, name=name)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/gen_io_ops.py", line 358, in _restore_slice
preferred_shard=preferred_shard, name=name)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/op_def_library.py", line 704, in apply_op
op_def=op_def)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 2260, in create_op
original_op=self._default_original_op, op_def=op_def)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 1230, in __init__
self._traceback = _extract_stack()

I figured out what my problem was.
TL;DR: The chief needs to know about all the variables in order to initialize them all. Non-chief workers can't create their own variables.
I was converting an old program, in which all workers had a few independent variables but needed to share some of them (I had been using ZMQ to pass these around), to a distributed TensorFlow setup, and I forgot to create all of the variables on all of the workers. I had something like
# Create worker-specific variable
with tf.variable_scope("world_{}".format(worker_id)):
    w1 = tf.get_variable("weight", shape=(input_dim, hidden_dim), dtype=tf.float32,
                         initializer=tf.truncated_normal_initializer())
instead of doing something like this:
# Create all worker-specific variables, so the chief knows about every one of them
all_w1 = {}
for worker in range(worker_cnt):
    with tf.variable_scope("world_{}".format(worker)):
        all_w1[worker] = tf.get_variable("weight", shape=(input_dim, hidden_dim), dtype=tf.float32,
                                         initializer=tf.truncated_normal_initializer())
# grab this worker's variable
w1 = all_w1[worker_id]
As for the errors...
I suspect that this caused some workers to die with the Master init: Unavailable: error message above, because the chief never knew about the variables those workers wanted to create.
I don't have a solid explanation for why the third error (Cannot assign a device) couldn't find the device, but I think it's again because only the chief could create that variable, and it didn't know about the new ones.
The first error seems to be because the computers weren't ready to talk after their failures, as I haven't seen it since making the fixes. I still see it if I kill a worker and start it up again, but it doesn't seem to be an issue if they all start up together.
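For reference, the usual between-graph pattern that avoids this whole class of mismatch is to build the same graph on every worker and let a device setter pin the variables onto the ps tasks. A minimal sketch (cluster addresses and shapes are placeholders, not my actual setup):
import tensorflow as tf

cluster = tf.train.ClusterSpec({"ps": ["ps0:2222"],
                                "worker": ["worker0:2222", "worker1:2222"]})

# Every worker runs this same graph-construction code; replica_device_setter places each
# variable on a ps task, so the chief knows about (and can initialize) all of them.
with tf.device(tf.train.replica_device_setter(
        worker_device="/job:worker/task:0",  # task index differs per worker
        cluster=cluster)):
    w1 = tf.get_variable("weight", shape=(10, 20), dtype=tf.float32,
                         initializer=tf.truncated_normal_initializer())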
Anyway, I hope that's helpful if anyone ever has the same error later on.

Related

How to resolve UnicodeError in Tensorflow 2 Object Detection API

When I was training with the TensorFlow Object Detection API, I got the following error. Can you tell me if there is any workaround?
Command executed:
python model_main_tf2.py --model_dir=models/my_ssd_mobilenet_v1_fpn_640x640_coco17_tpu-8 --pipeline_config_path=models/my_ssd_mobilenet_v1_fpn_640x640_coco17_tpu-8/pipeline.config
Error message:
File "model_main_tf2.py", line 115, in <module>
tf.compat.v1.app.run()
File "C:\Users\rh731\.virtualenvs\Tensorflow\lib\site-packages\tensorflow\python\platform\app.py", line 40, in ru
n
_run(main=main, argv=argv, flags_parser=_parse_flags_tolerate_undef)
File "C:\Users\rh731\.virtualenvs\Tensorflow\lib\site-packages\absl\app.py", line 303, in run
_run_main(main, args)
File "C:\Users\rh731\.virtualenvs\Tensorflow\lib\site-packages\absl\app.py", line 251, in _run_main
sys.exit(main(argv))
File "model_main_tf2.py", line 106, in main
model_lib_v2.train_loop(
File "C:\Users\rh731\.virtualenvs\Tensorflow\lib\site-packages\object_detection\model_lib_v2.py", line 611, in tr
ain_loop
manager = tf.compat.v2.train.CheckpointManager(
File "C:\Users\rh731\.virtualenvs\Tensorflow\lib\site-packages\tensorflow\python\training\checkpoint_management.p
y", line 640, in __init__
recovered_state = get_checkpoint_state(directory)
File "C:\Users\rh731\.virtualenvs\Tensorflow\lib\site-packages\tensorflow\python\training\checkpoint_management.p
y", line 278, in get_checkpoint_state
file_content = file_io.read_file_to_string(
File "C:\Users\rh731\.virtualenvs\Tensorflow\lib\site-packages\tensorflow\python\lib\io\file_io.py", line 352, in
read_file_to_string
return f.read()
File "C:\Users\rh731\.virtualenvs\Tensorflow\lib\site-packages\tensorflow\python\lib\io\file_io.py", line 117, in
read
self._preread_check()
File "C:\Users\rh731\.virtualenvs\Tensorflow\lib\site-packages\tensorflow\python\lib\io\file_io.py", line 79, in
_preread_check
self._read_buf = _pywrap_file_io.BufferedInputStream(
UnicodeDecodeError: 'utf-8' codec can't decode byte 0x83 in position 108: invalid start byte
What I did
- I tried converting the character encoding of pipeline.config.
- I tested the API installation (it passes, as in the attached image).
- I checked the execution command for mistakes.
Also, when training another network I was able to run training to the end without this kind of error. This time, too, I downloaded a pre-trained model and ran it.
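Judging from the traceback, the failing read is the checkpoint-state file inside model_dir rather than pipeline.config. A minimal way to reproduce just that read, assuming the same model_dir as in the command above, would be:
# Minimal reproduction of the read that raises in the traceback above.
# model_dir is the same directory passed to model_main_tf2.py; nothing else is assumed.
import tensorflow as tf

model_dir = "models/my_ssd_mobilenet_v1_fpn_640x640_coco17_tpu-8"
# CheckpointManager calls this internally; it reads the "checkpoint" state file in model_dir.
state = tf.train.get_checkpoint_state(model_dir)
print(state)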
Reference sites:
- Tutorial: https://tensorflow-object-detection-api-tutorial.readthedocs.io/en/latest/training.html#training-the-model
- List of pre-trained models: https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/tf2_detection_zoo.md
Sorry for the trouble, and thank you for your help.
Most probably it's because you are trying to run a TPU model on your local machine (I guessed that from your PyCharm screenshot). Try running a GPU-based model or a CPU one instead.

Solving MINLP with PYOMO/PYSP

Team,
currently I am working on a nonlinear stochastic optimization problem. So far, the toolbox has been really helpful, thank you! However, adding a nonlinear constraint has caused an error. I use the Gurobi solver. The problem results from the following constraint:
def max_pcr_power_rule(model, t):
    if t == 0:
        return 0 <= battery.P_bat_max - model.P_sc_max[t+1] - model.P_pcr
    else:
        return model.P_trade_c[t+1] + np.sqrt(-2*np.log(rob_opt.max_vio)) \
            * sum(model.U_max_pow[t,i]**2 for i in set_sim.tme_dat_stp)**(0.5) \
            <= battery.P_bat_max - model.P_sc_max[t+1] - model.P_pcr

model.max_pcr_power = Constraint(set_sim.tme_dat_stp, rule=max_pcr_power_rule)
I receive this error message:
Initializing extensive form algorithm for stochastic programming problems.
Exception encountered. Scenario tree manager attempting to shut down.
Traceback (most recent call last):
File "C:\Users\theil\Anaconda3\Scripts\runef-script.py", line 5, in
sys.exit(pyomo.pysp.ef_writer_script.main())
File "C:\Users\theil\Anaconda3\lib\site-packages\pyomo\pysp\ef_writer_script.py", line 863, in main
traceback=options.traceback)
File "C:\Users\theil\Anaconda3\lib\site-packages\pyomo\pysp\util\misc.py", line 344, in launch_command
rc = command(options, *cmd_args, **cmd_kwds)
File "C:\Users\theil\Anaconda3\lib\site-packages\pyomo\pysp\ef_writer_script.py", line 748, in runef
ef.solve()
File "C:\Users\theil\Anaconda3\lib\site-packages\pyomo\pysp\ef_writer_script.py", line 430, in solve
**solve_kwds)
File "C:\Users\theil\Anaconda3\lib\site-packages\pyomo\opt\parallel\manager.py", line 122, in queue
return self._perform_queue(ah, *args, **kwds)
File "C:\Users\theil\Anaconda3\lib\site-packages\pyomo\opt\parallel\local.py", line 59, in _perform_queue
results = opt.solve(*args, **kwds)
File "C:\Users\theil\Anaconda3\lib\site-packages\pyomo\opt\base\solvers.py", line 599, in solve
self._presolve(*args, **kwds)
File "C:\Users\theil\Anaconda3\lib\site-packages\pyomo\solvers\plugins\solvers\GUROBI.py", line 224, in _presolve
ILMLicensedSystemCallSolver._presolve(self, *args, **kwds)
File "C:\Users\theil\Anaconda3\lib\site-packages\pyomo\opt\solver\shellcmd.py", line 196, in _presolve
OptSolver._presolve(self, *args, **kwds)
File "C:\Users\theil\Anaconda3\lib\site-packages\pyomo\opt\base\solvers.py", line 696, in _presolve
**kwds)
File "C:\Users\theil\Anaconda3\lib\site-packages\pyomo\opt\base\solvers.py", line 767, in _convert_problem
**kwds)
File "C:\Users\theil\Anaconda3\lib\site-packages\pyomo\opt\base\convert.py", line 110, in convert_problem
problem_files, symbol_map = converter.apply(*tmp, **tmpkw)
File "C:\Users\theil\Anaconda3\lib\site-packages\pyomo\solvers\plugins\converter\model.py", line 96, in apply
io_options=io_options)
File "C:\Users\theil\Anaconda3\lib\site-packages\pyomo\core\base\block.py", line 1681, in write
io_options)
File "C:\Users\theil\Anaconda3\lib\site-packages\pyomo\repn\plugins\cpxlp.py", line 176, in call
include_all_variable_bounds=include_all_variable_bounds)
File "C:\Users\theil\Anaconda3\lib\site-packages\pyomo\repn\plugins\cpxlp.py", line 719, in _print_model_LP
"with nonlinear terms." % (constraint_data.name))
ValueError: Cannot write legal LP file. Constraint '1.max_pcr_power[1]' has a body with nonlinear terms.
I thought that the problem might lie in the nested formulation of the constraint, i.e. the combination of sum and exponential terms, so I put the sum() term into a separate variable. This didn't change the nonlinear character of the constraint, and the error stayed the same. My other suspicion was that the problem lies with the Gurobi solver, so I tried ipopt, which produced the following error message:
Error evaluating constraint 1: can't evaluate pow'(0,0.5).
ERROR: Solver (ipopt) returned non-zero return code (1)
ERROR: See the solver log above for diagnostic information.
Exception encountered. Scenario tree manager attempting to shut down.
Traceback (most recent call last):
File "C:\Users\theil\Anaconda3\Scripts\runef-script.py", line 5, in
sys.exit(pyomo.pysp.ef_writer_script.main())
File "C:\Users\theil\Anaconda3\lib\site-packages\pyomo\pysp\ef_writer_script.py", line 863, in main
traceback=options.traceback)
File "C:\Users\theil\Anaconda3\lib\site-packages\pyomo\pysp\util\misc.py", line 344, in launch_command
rc = command(options, *cmd_args, **cmd_kwds)
File "C:\Users\theil\Anaconda3\lib\site-packages\pyomo\pysp\ef_writer_script.py", line 748, in runef
ef.solve()
File "C:\Users\theil\Anaconda3\lib\site-packages\pyomo\pysp\ef_writer_script.py", line 434, in solve
**solve_kwds)
File "C:\Users\theil\Anaconda3\lib\site-packages\pyomo\opt\parallel\manager.py", line 122, in queue
return self._perform_queue(ah, *args, **kwds)
File "C:\Users\theil\Anaconda3\lib\site-packages\pyomo\opt\parallel\local.py", line 59, in _perform_queue
results = opt.solve(*args, **kwds)
File "C:\Users\theil\Anaconda3\lib\site-packages\pyomo\opt\base\solvers.py", line 626, in solve
"Solver (%s) did not exit normally" % self.name)
pyutilib.common._exceptions.ApplicationError: Solver (ipopt) did not exit normally
I am wondering now whether my mistake lies in the formulation of the constraint or in the way I use the solver. Otherwise I will have to simplify my problem to make it solvable.
I would be glad if you could point me in the right direction. Thank you!
Best regards
Philipp
As Erwin mentioned in the comment, Gurobi is generally not intended for nonlinear problems.
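If you stick with a nonlinear solver such as ipopt, the failure above (can't evaluate pow'(0,0.5)) is the derivative of the square root blowing up at zero. One common workaround, sketched here reusing the question's own objects (battery, rob_opt, set_sim and the model components are assumed to exist as in the original model; EPS is a hypothetical constant), is to keep the argument of the root strictly positive:
# Sketch: add a tiny epsilon inside the square root so ipopt never differentiates
# x**0.5 at x == 0. Tune EPS to be negligible relative to the real term.
import numpy as np
from pyomo.environ import Constraint

EPS = 1e-8  # assumption: small enough not to change the optimum meaningfully

def max_pcr_power_rule(model, t):
    if t == 0:
        return 0 <= battery.P_bat_max - model.P_sc_max[t+1] - model.P_pcr
    robust_term = np.sqrt(-2*np.log(rob_opt.max_vio)) \
        * (EPS + sum(model.U_max_pow[t,i]**2 for i in set_sim.tme_dat_stp))**(0.5)
    return model.P_trade_c[t+1] + robust_term \
        <= battery.P_bat_max - model.P_sc_max[t+1] - model.P_pcr

model.max_pcr_power = Constraint(set_sim.tme_dat_stp, rule=max_pcr_power_rule)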

ML Engine BigQuery: Request had insufficient authentication scopes

I'm running a TensorFlow model, submitting the training job on ML Engine. I have built a pipeline which reads from BigQuery using tf.contrib.cloud.python.ops.bigquery_reader_ops.BigQueryReader as the reader for the queue.
Everything works fine using Datalab and locally, with the GOOGLE_APPLICATION_CREDENTIALS variable pointing to the JSON file for the credentials key. However, when I submit the training job in the cloud I get these errors (I only post the main two):
Permission denied: Error executing an HTTP request (HTTP response code 403, error code 0, error message '') when reading schema for...
There was an error creating the model. Check the details: Request had insufficient authentication scopes.
I've already checked everything else, like correctly defining the table schema in the script and the project/dataset/table IDs and names.
I paste the whole error from the log below for clarity:
message: "Traceback (most recent call last):
File "/usr/lib/python2.7/runpy.py", line 174, in _run_module_as_main
"__main__", fname, loader, pkg_name)
File "/usr/lib/python2.7/runpy.py", line 72, in _run_code
exec code in run_globals
File "/root/.local/lib/python2.7/site-packages/trainer/task.py", line 131, in <module>
hparams=hparam.HParams(**args.__dict__)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/contrib/learn/python/learn/learn_runner.py", line 210, in run
return _execute_schedule(experiment, schedule)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/contrib/learn/python/learn/learn_runner.py", line 47, in _execute_schedule
return task()
File "/usr/local/lib/python2.7/dist-packages/tensorflow/contrib/learn/python/learn/experiment.py", line 495, in train_and_evaluate
self.train(delay_secs=0)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/contrib/learn/python/learn/experiment.py", line 275, in train
hooks=self._train_monitors + extra_hooks)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/contrib/learn/python/learn/experiment.py", line 665, in _call_train
monitors=hooks)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/util/deprecation.py", line 289, in new_func
return func(*args, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/contrib/learn/python/learn/estimators/estimator.py", line 455, in fit
loss = self._train_model(input_fn=input_fn, hooks=hooks)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/contrib/learn/python/learn/estimators/estimator.py", line 1007, in _train_model
_, loss = mon_sess.run([model_fn_ops.train_op, model_fn_ops.loss])
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/training/monitored_session.py", line 521, in __exit__
self._close_internal(exception_type)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/training/monitored_session.py", line 556, in _close_internal
self._sess.close()
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/training/monitored_session.py", line 791, in close
self._sess.close()
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/training/monitored_session.py", line 888, in close
ignore_live_threads=True)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/training/coordinator.py", line 389, in join
six.reraise(*self._exc_info_to_raise)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/training/queue_runner_impl.py", line 238, in _run
enqueue_callable()
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/client/session.py", line 1063, in _single_operation_run
target_list_as_strings, status, None)
File "/usr/lib/python2.7/contextlib.py", line 24, in __exit__
self.gen.next()
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/errors_impl.py", line 466, in raise_exception_on_not_ok_status
pywrap_tensorflow.TF_GetCode(status))
PermissionDeniedError: Error executing an HTTP request (HTTP response code 403, error code 0, error message '')
when reading schema for pasquinelli-bigdata:Transactions.t_11_Hotel_25_w_train#1505224768418
[[Node: GenerateBigQueryReaderPartitions = GenerateBigQueryReaderPartitions[columns=["F_RACC_GEST", "LABEL", "F_RCA", "W24", "ETA", "W22", "W23", "W20", "W21", "F_LEASING", "W2", "W16", "WLABEL", "SEX", "F_PIVA", "F_MUTUO", "Id_client", "F_ASS_VITA", "F_ASS_DANNI", "W19", "W18", "W17", "PROV", "W15", "W14", "W13", "W12", "W11", "W10", "W7", "W6", "W5", "W4", "W3", "F_FIN", "W1", "ImpTot", "F_MULTIB", "W9", "W8"], dataset_id="Transactions", num_partitions=1, project_id="pasquinelli-bigdata", table_id="t_11_Hotel_25_w_train", test_end_point="", timestamp_millis=1505224768418, _device="/job:localhost/replica:0/task:0/cpu:0"]()]]
Any suggestion would be extremely helpful, since I'm relatively new to Google Cloud.
Thank you all.
Support for reading BigQuery data from Cloud ML Engine is still under development, so what you are doing is currently unsupported. The issue you are hitting is that the machines ML Engine runs on do not have the right scopes to talk to BigQuery. A potential issue you may also encounter running locally is poor performance reading from BigQuery. These are two examples of work that still needs to be addressed.
In the meantime, I recommend exporting data to GCS for training. This is going to be much more scalable, so you don't have to worry about poor training performance as your data grows. It can also be a good pattern, because it lets you preprocess your data once, write the result to GCS in CSV format, and then do multiple training runs to try out different algorithms or hyperparameters.
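For example, a one-off export with the BigQuery Python client library might look like this (the project/dataset/table come from the error above; the GCS bucket is a placeholder):
# Sketch of the "export to GCS, then train from CSV" step suggested above.
from google.cloud import bigquery

client = bigquery.Client(project="pasquinelli-bigdata")
extract_job = client.extract_table(
    "pasquinelli-bigdata.Transactions.t_11_Hotel_25_w_train",
    "gs://my-training-bucket/transactions/train-*.csv",  # hypothetical bucket/path
)
extract_job.result()  # block until the export finishes; default format is CSV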

Using graph_metrics.py with a saved graph

I want to view statistics of my model by saving my graph to a file then running graph_metrics.py.
I have tried a few different things to write the file; my best effort is:
tf.train.write_graph( session.graph_def, ".", "my_graph", as_text=True )
But here's what happens:
$ python ./util/graph_metrics.py --noinput_binary --graph my_graph
Traceback (most recent call last):
File "./util/graph_metrics.py", line 137, in <module>
tf.app.run()
File ".virtualenv/local/lib/python2.7/site-packages/tensorflow/python/platform/app.py", line 30, in run
sys.exit(main(sys.argv))
File "./util/graph_metrics.py", line 85, in main
FLAGS.batch_size)
File "./util/graph_metrics.py", line 109, in calculate_graph_metrics
input_tensor = sess.graph.get_tensor_by_name(input_layer)
File ".virtualenv/local/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 2531, in get_tensor_by_name
return self.as_graph_element(name, allow_tensor=True, allow_operation=False)
File ".virtualenv/local/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 2385, in as_graph_element
return self._as_graph_element_locked(obj, allow_tensor, allow_operation)
File ".virtualenv/local/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 2427, in _as_graph_element_locked
"graph." % (repr(name), repr(op_name)))
KeyError: "The name 'Mul:0' refers to a Tensor which does not exist. The operation, 'Mul', does not exist in the graph."
Is there a complete working example of saving a graph, then analyzing it with graph_metrics.py?
This process seems to involve a magic incantation that I haven't yet discovered.
The error you're hitting is because you need to specify the name of your own input node with --input_layer= (it just defaults to Mul:0 because that's what we use in one of our Inception models):
https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/tools/graph_metrics.py#L51
The graph_metrics script is still very much a work in progress unfortunately, and you may hit problems with shape inference, but hopefully this should get you past the initial hurdle.
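For example, assuming the graph's input tensor is actually named input:0 (substitute whatever name your model uses):
$ python ./util/graph_metrics.py --noinput_binary --graph my_graph --input_layer=input:0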

iPython won't start anymore after using os.dup2()

I was just trying out the os.dup2() function to redirect outputs when I typed os.dup2(3, 1), which my IPython (2.7) didn't seem to like.
It crashed and now it won't start again, yielding the error:
Traceback (most recent call last):
File "/usr/bin/ipython", line 8, in <module>
launch_new_instance()
File "/usr/lib/python2.7/dist-packages/IPython/frontend/terminal/ipapp.py", line 402, in launch_new_instance
app.initialize()
File "<string>", line 2, in initialize
File "/usr/lib/python2.7/dist-packages/IPython/config/application.py", line 84, in catch_config_error
return method(app, *args, **kwargs)
File "/usr/lib/python2.7/dist-packages/IPython/frontend/terminal/ipapp.py", line 312, in initialize
self.init_shell()
File "/usr/lib/python2.7/dist-packages/IPython/frontend/terminal/ipapp.py", line 332, in init_shell
ipython_dir=self.ipython_dir)
File "/usr/lib/python2.7/dist-packages/IPython/config/configurable.py", line 318, in instance
inst = cls(*args, **kwargs)
File "/usr/lib/python2.7/dist-packages/IPython/frontend/terminal/interactiveshell.py", line 183, in __init__
user_module=user_module, custom_exceptions=custom_exceptions
File "/usr/lib/python2.7/dist-packages/IPython/core/interactiveshell.py", line 456, in __init__
self.init_readline()
File "/usr/lib/python2.7/dist-packages/IPython/core/interactiveshell.py", line 1777, in init_readline
self.refill_readline_hist()
File "/usr/lib/python2.7/dist-packages/IPython/core/interactiveshell.py", line 1789, in refill_readline_hist
include_latest=True):
File "/usr/lib/python2.7/dist-packages/IPython/core/history.py", line 256, in get_tail
return reversed(list(cur))
DatabaseError: database disk image is malformed
If you suspect this is an IPython bug, please report it at:
https://github.com/ipython/ipython/issues
or send an email to the mailing list at ipython-dev@scipy.org
You can print a more detailed traceback right now with "%tb", or use "%debug"
to interactively debug it.
Extra-detailed tracebacks for bug-reporting purposes can be enabled via:
c.Application.verbose_crash=True
Can anyone help me with that?
Reposting as an answer:
That looks like fd 3 is your IPython history database, and you redirected stdout to it and corrupted it.
To get it to start again, remove or rename ~/.ipython/profile_default/history.sqlite (or ~/.config/ipython/profile_default/history.sqlite on certain IPython versions on Linux).
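For instance, a small sketch of that rename (the path is the default one named above; adjust it if your profile lives elsewhere):
# Move the corrupted history database aside; IPython recreates a fresh one on next start.
import os

hist = os.path.expanduser("~/.ipython/profile_default/history.sqlite")
if os.path.exists(hist):
    os.rename(hist, hist + ".corrupt")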