error in use of tf.app.flags - tensorflow

I used tf.app.flags in my tensorflow program like this:
import tensorflow as tf

flags = tf.app.flags
FLAGS = flags.FLAGS
flags.DEFINE_string('model_dir', './models', 'Save checkpoint')
.
.
.
if __name__ == "__main__":
    # main()
    tf.app.run()
But when I run my code a second time, it raises this error:
ArgumentError: argument --model_dir: conflicting option string: --model_dir
I think TensorFlow creates an argument for --model_dir, and when it runs again it tries to create the argument again, which conflicts with the existing --model_dir.
Is there any way to solve this problem, or should I use Python's own argument parsing instead of tf.app.flags?

My guess is that you are working in an environment like a Jupyter/IPython notebook.
The reason you are having this issue is that the flags data is maintained within the Python session: tf.app.flags.FLAGS.__getattr__('model_dir') is still equal to ./models even if you reset your FLAGS variable.
If you are using a notebook, I suggest you put your flag definitions in a separate cell. The only way that I found to reset tf.app.flags.FLAGS is to restart the kernel/session.
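If restarting the kernel every time is inconvenient, a guard like the following avoids redefining a flag that already exists. This is a minimal sketch assuming the pre-absl tf.app.flags API from the question; define_string_once is a hypothetical helper that simply swallows the ArgumentError shown above:
import argparse
import tensorflow as tf

flags = tf.app.flags

def define_string_once(name, default, docstring):
    # Ignore the "conflicting option string" ArgumentError that
    # argparse raises when the flag already exists in this session.
    try:
        flags.DEFINE_string(name, default, docstring)
    except argparse.ArgumentError:
        pass

define_string_once('model_dir', './models', 'Save checkpoint')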

You can try it like this:
import tensorflow as tf

# define flags
tf.flags.DEFINE_integer("age", 17, "age of user (default: 20)")
tf.flags.DEFINE_boolean("drink_allow", False, "if can drink or not (default: False)")
tf.flags.DEFINE_float("weight", 55.55, "weight of user (default: 55.55kg)")

FLAGS = tf.flags.FLAGS  # init flags
FLAGS._parse_flags()  # parse flags
for attr, value in FLAGS.__flags.items():
    print("attr:%s\tvalue:%s" % (attr, str(value)))

Related

Airflow Xcom with SSHOperator

I'm trying to get the output of an SSHOperator into an XCom and read it in Python.
def decision_function(**context):
    ti = context['ti']
    output_ssh = ti.xcom_pull(task_ids='ssh_task')
    print('ssh output is: {}'.format(output_ssh))

ls = SSHOperator(
    task_id="ssh_task",
    command="ls",
    ssh_hook=sshHook,
    dag=mydag)

get_xcom = PythonOperator(
    task_id='test',
    python_callable=decision_function,
    provide_context=True,
    dag=mydag)

ls >> get_xcom
I get the wrong result in the xcom_pull:
ssh output is: MTcySyBhaXJmbG93CTIuMUcgQXV0b0hpdHVtX05ld
Everything works fine with BashOperator, but when I try to use the SSHOperator it doesn't work correctly.
I also tried changing the enable_xcom_pickling param in the config, but it still doesn't work.
airflow: 2.1.4
linux
Thank you.
The value of the XCom may be b64-encoded depending on the value of enable_xcom_pickling, as you can see in the source code.
The issue you are facing has been discussed in a PR which suggested changing the functionality to be like other operators, but the PR was not merged; you can see the reasons for it in the code review comments.
If you wish, you can create a custom version of the operator without the enable_xcom_pickling condition:
from typing import Union

from airflow.exceptions import AirflowException
from airflow.providers.ssh.operators.ssh import SSHOperator

class MySSHOperator(SSHOperator):
    def execute(self, context=None) -> Union[bytes, str]:
        result: Union[bytes, str]
        if self.command is None:
            raise AirflowException("SSH operator error: SSH command not specified. Aborting.")
        # Forcing get_pty to True if the command begins with "sudo".
        self.get_pty = self.command.startswith('sudo') or self.get_pty
        try:
            with self.get_ssh_client() as ssh_client:
                result = self.run_ssh_client_command(ssh_client, self.command)
        except Exception as e:
            raise AirflowException(f"SSH operator error: {str(e)}")
        return result.decode('utf-8')
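Alternatively, since the stored value is just base64-encoded, you can decode it on the pulling side instead of subclassing the operator. A minimal sketch, assuming the XCom value is the encoded string shown in the question:
import base64

def decision_function(**context):
    ti = context['ti']
    output_ssh = ti.xcom_pull(task_ids='ssh_task')
    # undo the b64 encoding applied when enable_xcom_pickling is False
    decoded = base64.b64decode(output_ssh).decode('utf-8')
    print('ssh output is: {}'.format(decoded))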

Argparser interactive

My question is pretty straightforward:
Does argparse have an option to prompt for missing arguments via input()?
script.py:
import argparse
parser = argparse.ArgumentParser()
parser.add_argument('mode', metavar='', help='Set count of likes', default=49)
parser.add_argument('-n', '--number', metavar='', help='Set count of likes', default=49, required=True)
parser.add_argument('-f', '--frequency', metavar='', help='Set chance to like/dislike', default=70)
# Running it via bash:
$ python script.py
# argparse automatically detects that the required argument "--number" is missing and asks for it via the input built-in
>>> (argparse) Missed required argument "number", please specify it now, number:
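As far as I know argparse has no such built-in option, but a small post-parse fallback gets close. A sketch (note required=True is dropped so parsing succeeds and the script can prompt afterwards; the prompt text just mirrors the desired output above):
import argparse

parser = argparse.ArgumentParser()
parser.add_argument('-n', '--number', metavar='', help='Set count of likes', default=None)
args = parser.parse_args()

if args.number is None:
    # fall back to an interactive prompt for the missing argument
    args.number = input('Missed required argument "number", please specify it now, number: ')
print(args.number)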

stderr changes behavior of python's Popen when closing the script

I was using this script on python2 to launch an application and then immediately exit the python script without waiting for the child process to end. This was my code:
import os
import platform
import subprocess
import sys

kwargs = {}
if platform.system() == 'Windows':
    # from msdn [1]
    CREATE_NEW_PROCESS_GROUP = 0x00000200  # note: could get it from subprocess
    DETACHED_PROCESS = 0x00000008  # 0x8 | 0x200 == 0x208
    kwargs.update(creationflags=DETACHED_PROCESS | CREATE_NEW_PROCESS_GROUP)
    kwargs.update(stderr=subprocess.PIPE)
elif sys.version_info < (3, 2):  # assume posix
    kwargs.update(preexec_fn=os.setsid)
else:  # Python 3.2+ and Unix
    kwargs.update(start_new_session=True)

subprocess.Popen([APP], stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE, **kwargs)
It works OK on python 2.7, but it doesn't start the expected 'APP' on python 3.7 without changes.
In order to make it work in python3, I found two independent workarounds:
Change stderr to this: stderr=subprocess.DEVNULL
or
Add a time.sleep(0.1) call after the Popen (before closing the script).
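For reference, the first workaround applied to the Popen call above looks like this (a sketch for the POSIX branch; on the Windows branch kwargs already sets stderr, so that update would have to be dropped there):
subprocess.Popen([APP], stdin=subprocess.PIPE, stdout=subprocess.PIPE,
                 stderr=subprocess.DEVNULL, **kwargs)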
I assume this is not actually related to python, but to some event that needs to happen after the process gets opened before the python script can safely exit?
Any hints? I'd really like to know why it happens. Right now I simply added the sleep call.
Thank you

Perl 6 $*ARGFILES.handles in binary mode?

I'm trying out $*ARGFILES.handles and it seems that it opens the files in binary mode.
I'm writing a zip-merge program, that prints one line from each file until there are no more lines to read.
#! /usr/bin/env perl6
my @handles = $*ARGFILES.handles;
# say $_.encoding for @handles;
while @handles
{
    my $handle = @handles.shift;
    say $handle.get;
    @handles.push($handle) unless $handle.eof;
}
I invoke it like this: zip-merge person-say3 repeat repeat2
It fails with: Cannot do 'get' on a handle in binary mode in block at ./zip-merge line 7
The specified files are text files (encoded in UTF-8), and I get the error message for non-executable files as well as executable ones (containing Perl 6 code).
The commented-out line says utf8 for every file I give it, so they should not be binary.
perl6 -v: This is Rakudo version 2018.10 built on MoarVM version 2018.10
Have I done something wrong, or have I uncovered an error?
The IO::Handle objects that .handles returns are closed.
my @*ARGS = 'test.p6';
my @handles = $*ARGFILES.handles;
for @handles { say $_ }
# IO::Handle<"test.p6".IO>(closed)
If you just want to get your code to work, add the following line after assigning to @handles:
.open for @handles;
The reason for this is the iterator for .handles is written in terms of IO::CatHandle.next-handle which opens the current handle, and closes the previous handle.
The problem is that all of them get a chance to be both the current handle, and the previous handle before you get a chance to do any work on them.
(Perhaps .next-handle and/or .handles needs a :!close parameter.)
Assuming you want it to work like roundrobin, I would actually write it more like this:
#! /usr/bin/env perl6
use v6.d;
my @handles = $*ARGFILES.handles;
# a sequence of line sequences
my $line-seqs = @handles.map(*.open.lines);
# Seq.new(
#   Seq.new( '#! /usr/bin/env perl6', 'use v6.d' ), # first file
#   Seq.new( 'foo', 'bar', 'baz' ),                 # second file
# )
for flat roundrobin $line-seqs {
    .say
}
# `roundrobin` without `flat` would give the following result:
# ('#! /usr/bin/env perl6', 'foo'),
# ('use v6.d', 'bar'),
# ('baz')
If you used an array for $line-seqs, you will need to de-itemize (.<>) the values before passing them to roundrobin.
for flat roundrobin @line-seqs.map(*.<>) {
    .say
}
Actually I personally would be more likely to write something similar to this (long) one-liner.
$*ARGFILES.handles.eager».open».lines.&roundrobin.flat.map: *.put
:bin is always set on this type of object. Since you are working on the handles, you should either read line by line as shown in the example, or reset the handle so that it is not in binary mode.

How do I set a backend for django-celery. I set CELERY_RESULT_BACKEND, but it is not recognized

I set CELERY_RESULT_BACKEND = "amqp" in celeryconfig.py
but I get:
>>> from tasks import add
>>> result = add.delay(3,5)
>>> result.ready()
Traceback (most recent call last):
File "<console>", line 1, in <module>
File "/djangoprojects/venv/local/lib/python2.7/site-packages/celery/result.py", line 105, in ready
return self.state in self.backend.READY_STATES
File "/djangoprojects/venv/local/lib/python2.7/site-packages/celery/result.py", line 184, in state
return self.backend.get_status(self.task_id)
File "/djangoprojects/venv/local/lib/python2.7/site-packages/celery/backends/base.py", line 414, in _is_disabled
raise NotImplementedError("No result backend configured. "
NotImplementedError: No result backend configured. Please see the documentation for more information.
I just went through this, so I can shed some light on it. One might think, given all of the great documentation, that some of this would have been a bit more obvious.
I'll assume you have RabbitMQ up and functioning (it needs to be running), and that you have django-celery installed.
Once you have that, all you need to do is include this single line in your settings.py file:
BROKER_URL = "amqp://guest:guest@localhost:5672//"
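For reference, in the old django-celery setup this line usually sits alongside a couple of other settings. A minimal settings.py sketch under that assumption (the result backend value is the one from the question):
# settings.py (sketch; adjust credentials/host for your setup)
BROKER_URL = "amqp://guest:guest@localhost:5672//"
CELERY_RESULT_BACKEND = "amqp"  # the setting from the question

INSTALLED_APPS = (
    # ...
    'djcelery',  # django-celery
)

import djcelery
djcelery.setup_loader()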
Then you need to run syncdb and start this thing up using:
python manage.py celeryd -E -B --loglevel=info
The -E states that you want events captured and the -B states you want celerybeat running. The former enables you to actually see something in the admin window and the latter allows you to schedule tasks. Finally, you need to ensure that you are actually going to capture the events and the status. So in another terminal run this:
./manage.py celerycam
And then finally you're able to see the working example provided in the docs, again assuming you created the tasks.py it says to.
>>> result = add.delay(4, 4)
>>> result.ready() # returns True if the task has finished processing.
False
>>> result.result # task is not ready, so no return value yet.
None
>>> result.get() # Waits until the task is done and returns the retval.
8
>>> result.result # direct access to result, doesn't re-raise errors.
8
>>> result.successful() # returns True if the task didn't end in failure.
True
Furthermore, you are then able to view your task status in the admin panel.
I hope this helps! I would add one more thing which helped me: watching the RabbitMQ log file was key, as it helped me identify that django-celery was actually talking to RabbitMQ.
Are you running django celery?
If so, you need to start a python shell in the context of django (or whatever the technical term is).
Type:
python manage.py shell
And try your commands from that shell
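For example, re-running the commands from the question inside that shell:
$ python manage.py shell
>>> from tasks import add
>>> result = add.delay(3, 5)
>>> result.ready()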
Hi, I tried everything to get celery v3.1.25 working with Django 1.8; nothing worked.
Finally the line below helped me, feeling happy:
app = Celery('documents', backend="celery.backends.amqp:AMQPBackend")
Setting backend="celery.backends.amqp:AMQPBackend" fixed my error.