tensorflow 2.0 keras save model to hdfs: Can't decrement id ref count

I have mounted an HDFS drive via hdfs-fuse, so I can access HDFS under the path /hdfs/xxx.
After training a model with Keras, I want to save it to /hdfs/model.h5 with model.save("/hdfs/model.h5").
I get the following error:
2020-02-26T10:06:51.83869705Z File "h5py/_objects.pyx", line 193, in h5py._objects.ObjectID.__dealloc__
2020-02-26T10:06:51.838791107Z RuntimeError: Can't decrement id ref count (file write failed: time = Wed Feb 26 10:06:51 2020
2020-02-26T10:06:51.838796288Z , filename = '/hdfs/model.h5', file descriptor = 3, errno = 95, error message = 'Operation not supported', buf = 0x7f20d000ddc8, total write size = 512, bytes this sub-write = 512, bytes actually written = 18446744073709551615, offset = 298264)
2020-02-26T10:06:51.838802442Z Exception ignored in: 'h5py._objects.ObjectID.__dealloc__'
2020-02-26T10:06:51.838807122Z Traceback (most recent call last):
2020-02-26T10:06:51.838811833Z File "h5py/_objects.pyx", line 193, in h5py._objects.ObjectID.__dealloc__
2020-02-26T10:06:51.838816793Z RuntimeError: Can't decrement id ref count (file write failed: time = Wed Feb 26 10:06:51 2020
2020-02-26T10:06:51.838821942Z , filename = '/hdfs/model.h5', file descriptor = 3, errno = 95, error message = 'Operation not supported', buf = 0x7f20d000ddc8, total write size = 512, bytes this sub-write = 512, bytes actually written = 18446744073709551615, offset = 298264)
2020-02-26T10:06:51.838827917Z Traceback (most recent call last):
2020-02-26T10:06:51.838832755Z File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/keras/saving/hdf5_format.py", line 117, in save_model_to_hdf5
2020-02-26T10:06:51.838838098Z f.flush()
2020-02-26T10:06:51.83885453Z File "/usr/local/lib/python3.6/dist-packages/h5py/_hl/files.py", line 452, in flush
2020-02-26T10:06:51.838859816Z h5f.flush(self.id)
2020-02-26T10:06:51.838864401Z File "h5py/_objects.pyx", line 54, in h5py._objects.with_phil.wrapper
2020-02-26T10:06:51.838869302Z File "h5py/_objects.pyx", line 55, in h5py._objects.with_phil.wrapper
2020-02-26T10:06:51.838874126Z File "h5py/h5f.pyx", line 146, in h5py.h5f.flush
2020-02-26T10:06:51.838879016Z RuntimeError: Can't flush cache (file write failed: time = Wed Feb 26 10:06:51 2020
2020-02-26T10:06:51.838885827Z , filename = '/hdfs/model.h5', file descriptor = 3, errno = 95, error message = 'Operation not supported', buf = 0x4e5b018, total write size = 4, bytes this sub-write = 4, bytes actually written = 18446744073709551615, offset = 34552)
But I can write a file directly to the same path:
with open("/hdfs/a.txt", "w") as f:
    f.write("1")
Also, I've figured out a tricky workaround, and it works:
from shutil import move
model.save("temp.h5")
move("temp.h5", "/hdfs/model.h5")
So maybe the problem lies in the Keras API: it can save the model locally, but it cannot save directly to an HDFS path.
Any idea how to fix this?

I don't think TensorFlow makes any promises about being able to save to hdfs-fuse. Your (final) error is "Can't flush cache", not "Can't decrement id ref count", which basically means "can't save straight to hdfs-fuse". But to be honest, it seems solved to me: your workaround is fine.
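For reference, the workaround can be wrapped in a small helper. This is a minimal sketch, assuming a writable local temp directory and the fuse mount at /hdfs; shutil.move handles the copy across filesystems:
import os
import shutil
import tempfile

def save_to_hdfs(model, hdfs_path):
    # Save to local disk first, where h5py's write pattern is supported...
    local_path = os.path.join(tempfile.mkdtemp(), os.path.basename(hdfs_path))
    model.save(local_path)
    # ...then move the finished file onto the fuse mount in one go.
    shutil.move(local_path, hdfs_path)

save_to_hdfs(model, "/hdfs/model.h5")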


Data Length Error when Merging PDFs with PyPDF2

I am starting a project that will take specific pages out of each PDF in a folder and merge those pages into a single file. When I run the code quoted below, I get the error shown about the length of the encryption data, and I don't know where I would need to address it.
from PyPDF2 import PdfFileMerger
import glob

files = glob.glob('C:/Users/Jake/Documents/UPLOAD/test_merge/*.pdf')
merger = PdfFileMerger()
for file in files:
    merger.append(file)
merger.write("merged.pdf")
merger.close()
ERROR
Traceback (most recent call last):
File "C:\Users\Jake\Documents\Work Projects\Python\Contract Merger\Merger .02", line 10, in <module>
merger.write("merged.pdf")
File "C:\Users\Jake\Anaconda3\lib\site-packages\PyPDF2\_merger.py", line 312, in write
my_file, ret_fileobj = self.output.write(fileobj)
File "C:\Users\Jake\Anaconda3\lib\site-packages\PyPDF2\_writer.py", line 838, in write
self.write_stream(stream)
File "C:\Users\Jake\Anaconda3\lib\site-packages\PyPDF2\_writer.py", line 811, in write_stream
self._sweep_indirect_references(self._root)
File "C:\Users\Jake\Anaconda3\lib\site-packages\PyPDF2\_writer.py", line 960, in _sweep_indirect_references
data = self._resolve_indirect_object(data)
File "C:\Users\Jake\Anaconda3\lib\site-packages\PyPDF2\_writer.py", line 1005, in _resolve_indirect_object
real_obj = data.pdf.get_object(data)
File "C:\Users\Jake\Anaconda3\lib\site-packages\PyPDF2\_reader.py", line 1187, in get_object
retval = self._encryption.decrypt_object(
File "C:\Users\Jake\Anaconda3\lib\site-packages\PyPDF2\_encryption.py", line 747, in decrypt_object
return cf.decrypt_object(obj)
File "C:\Users\Jake\Anaconda3\lib\site-packages\PyPDF2\_encryption.py", line 185, in decrypt_object
obj[dictkey] = self.decrypt_object(value)
File "C:\Users\Jake\Anaconda3\lib\site-packages\PyPDF2\_encryption.py", line 179, in decrypt_object
data = self.strCrypt.decrypt(obj.original_bytes)
File "C:\Users\Jake\Anaconda3\lib\site-packages\PyPDF2\_encryption.py", line 87, in decrypt
d = aes.decrypt(data)
File "C:\Users\Jake\Anaconda3\lib\site-packages\Crypto\Cipher\_mode_cbc.py", line 246, in decrypt
raise ValueError("Data must be padded to %d byte boundary in CBC mode" % self.block_size)
ValueError: Data must be padded to 16 byte boundary in CBC mode
[Finished in 393ms]
I wrote a basic program following a YouTube video and tried to run it, but I got an error saying that PyCryptodome is a dependency of PyPDF2. After installing that, I am getting this error about the data length for encryption when writing the PDF. Googling the error led me to this solution. I am a bit of a novice, and I don't really understand why any kind of encryption is being applied in the first place, other than what I assume the PDF reader/writer needs to operate, so I don't know where I would apply that solution in this code.
After writing up this question, I was led to this solution; I tried the code below and received the same error.
from PyPDF2 import PdfFileMerger, PdfFileReader
import glob

merger = PdfFileMerger()
files = glob.glob('C:/Users/Jake/Documents/UPLOAD/test_merge/*.pdf')
for filename in files:
    with open(filename, 'rb') as source:
        tmp = PdfFileReader(source)
        merger.append(tmp)
merger.write('Result.pdf')
ERROR
Traceback (most recent call last):
File "C:\Users\Jake\Documents\Work Projects\Python\Contract Merger\Merger .03.py", line 13, in <module>
merger.write('Result.pdf')
File "C:\Users\Jake\Anaconda3\lib\site-packages\PyPDF2\_merger.py", line 312, in write
my_file, ret_fileobj = self.output.write(fileobj)
File "C:\Users\Jake\Anaconda3\lib\site-packages\PyPDF2\_writer.py", line 838, in write
self.write_stream(stream)
File "C:\Users\Jake\Anaconda3\lib\site-packages\PyPDF2\_writer.py", line 811, in write_stream
self._sweep_indirect_references(self._root)
File "C:\Users\Jake\Anaconda3\lib\site-packages\PyPDF2\_writer.py", line 960, in _sweep_indirect_references
data = self._resolve_indirect_object(data)
File "C:\Users\Jake\Anaconda3\lib\site-packages\PyPDF2\_writer.py", line 1005, in _resolve_indirect_object
real_obj = data.pdf.get_object(data)
File "C:\Users\Jake\Anaconda3\lib\site-packages\PyPDF2\_reader.py", line 1187, in get_object
retval = self._encryption.decrypt_object(
File "C:\Users\Jake\Anaconda3\lib\site-packages\PyPDF2\_encryption.py", line 747, in decrypt_object
return cf.decrypt_object(obj)
File "C:\Users\Jake\Anaconda3\lib\site-packages\PyPDF2\_encryption.py", line 185, in decrypt_object
obj[dictkey] = self.decrypt_object(value)
File "C:\Users\Jake\Anaconda3\lib\site-packages\PyPDF2\_encryption.py", line 179, in decrypt_object
data = self.strCrypt.decrypt(obj.original_bytes)
File "C:\Users\Jake\Anaconda3\lib\site-packages\PyPDF2\_encryption.py", line 87, in decrypt
d = aes.decrypt(data)
File "C:\Users\Jake\Anaconda3\lib\site-packages\Crypto\Cipher\_mode_cbc.py", line 246, in decrypt
raise ValueError("Data must be padded to %d byte boundary in CBC mode" % self.block_size)
ValueError: Data must be padded to 16 byte boundary in CBC mode
[Finished in 268ms]
My thinking is that something else has gone wrong, but I am at a loss as to what that could be.
What have I done wrong with this build to get this error, and how can I correct it?
It turns out this is an issue with PyPDF2 itself. There is a three-line fix that can be injected to correct the error until it is patched upstream.
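Until a patched release lands, the fix can be applied as a monkey-patch at startup. This is an illustrative sketch only, assuming the failure comes from ciphertext whose length is not a multiple of the AES block size; the exact upstream patch may differ. It zero-pads the data before handing it to PyPDF2's internal CryptAES.decrypt:
from PyPDF2 import _encryption

_original_decrypt = _encryption.CryptAES.decrypt

def _padded_decrypt(self, data):
    # Zero-pad to a 16-byte boundary so AES-CBC decryption does not raise.
    if len(data) % 16:
        data = data + bytes(16 - len(data) % 16)
    return _original_decrypt(self, data)

_encryption.CryptAES.decrypt = _padded_decrypt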

datasets = tfds.load(name="mnist") comes with error "Expected binary or unicode string, got WindowsGPath......"

I am having some difficulty with tensorflow_datasets when trying to load MNIST.
Python: 3.7
tensorflow: 2.1.0
tensorflow_datasets has been upgraded to the latest version, 4.6, because the version installed by default alongside TensorFlow has no attribute 'load'.
But now the data cannot be downloaded and extracted successfully. With the following command:
datasets = tfds.load(name="mnist")
the error message is:
Downloading and preparing dataset Unknown size (download: Unknown size, generated: Unknown size, total: Unknown size) to ~\tensorflow_datasets\mnist\3.0.1...
Extraction completed...: 0 file [00:00, ? file/s]██████████████████████████████████████████████████████████████| 4/4 [00:00<00:00, 138.37 url/s]
Dl Size...: 100%|██████████████████████████████████████████████████████████████████████████| 11594722/11594722 [00:00<00:00, 373172106.07 MiB/s]
Dl Completed...: 100%|█████████████████████████████████████████████████████████████████████████████████████████| 4/4 [00:00<00:00, 122.03 url/s]
Traceback (most recent call last):
File "", line 1, in
File "C:\Users\Wilso\Anaconda3\envs\tfgpu\lib\site-packages\tensorflow_datasets\core\load.py", line 327, in load
dbuilder.download_and_prepare(**download_and_prepare_kwargs)
File "C:\Users\Wilso\Anaconda3\envs\tfgpu\lib\site-packages\tensorflow_datasets\core\dataset_builder.py", line 483, in download_and_prepare
download_config=download_config,
File "C:\Users\Wilso\Anaconda3\envs\tfgpu\lib\site-packages\tensorflow_datasets\core\dataset_builder.py", line 1222, in _download_and_prepare
disable_shuffling=self.info.disable_shuffling,
File "C:\Users\Wilso\Anaconda3\envs\tfgpu\lib\site-packages\tensorflow_datasets\core\split_builder.py", line 310, in submit_split_generation
return self._build_from_generator(**build_kwargs)
File "C:\Users\Wilso\Anaconda3\envs\tfgpu\lib\site-packages\tensorflow_datasets\core\split_builder.py", line 376, in _build_from_generator
leave=False,
File "C:\Users\Wilso\Anaconda3\envs\tfgpu\lib\site-packages\tqdm\std.py", line 1195, in iter
for obj in iterable:
File "C:\Users\Wilso\Anaconda3\envs\tfgpu\lib\site-packages\tensorflow_datasets\image_classification\mnist.py", line 151, in _generate_examples
images = _extract_mnist_images(data_path, num_examples)
File "C:\Users\Wilso\Anaconda3\envs\tfgpu\lib\site-packages\tensorflow_datasets\image_classification\mnist.py", line 350, in _extract_mnist_images
f.read(16) # header
File "C:\Users\Wilso\Anaconda3\envs\tfgpu\lib\site-packages\tensorflow_core\python\lib\io\file_io.py", line 122, in read
self._preread_check()
File "C:\Users\Wilso\Anaconda3\envs\tfgpu\lib\site-packages\tensorflow_core\python\lib\io\file_io.py", line 84, in _preread_check
compat.as_bytes(self.__name), 1024 * 512)
File "C:\Users\Wilso\Anaconda3\envs\tfgpu\lib\site-packages\tensorflow_core\python\util\compat.py", line 87, in as_bytes
(bytes_or_text,))
TypeError: Expected binary or unicode string, got WindowsGPath('C:\Users\Wilso\tensorflow_datasets\downloads\extracted\GZIP.cvdf-datasets_mnist_train-images-idx3-ubyteRA_Kv3PMVG-iFHXoHqNwJlYF9WviEKQCTSyo8gNSNgk.gz')
Try:
(ds_train, ds_test), ds_info = tfds.load(
    "mnist",
    split=["train", "test"],
    shuffle_files=True,
    as_supervised=True,  # returns tuples (img, label) instead of a dict
    with_info=True,      # also returns info about the dataset
)
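With as_supervised=True the splits yield (image, label) tuples, so a quick sanity check after loading might look like this (a minimal usage sketch):
for image, label in ds_train.take(1):
    print(image.shape, label.numpy())  # e.g. (28, 28, 1) and a digit 0-9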

Running into uvloop issues with Database queries from Rasa-X?

I'm trying to make a simple query to my Amazon Neptune database from Rasa X.
Here is the code from my actions.py:
class ActionQueryDietary(Action):
    def name(self) -> Text:
        return "action_query_dietary"

    def run(self, dispatcher: CollectingDispatcher, tracker: Tracker,
            domain: Dict[Text, Any]) -> List[Dict[Text, Any]]:
        available = False
        restaurant = "XXXX"
        dietaryQuestion = tracker.get_slot('dietaryQuestion')
        g, remoteConn = kb.openConnection()
        dietary = kb.getDietary(g, restaurant, dietaryQuestion)
        if dietary == "Yes":
            available = True
        if available:
            dispatcher.utter_message(text="According to our knowledge base, {} is on the menu".format(dietaryQuestion))
        else:
            dispatcher.utter_message(text="Sorry, according to our knowledge base, we don't have this option on the menu")
        kb.closeConnection(remoteConn)
        return []
and here is the code from knowledgebase.py:
def getDietary(g, restaurant, dietary):
    properties = queryRestaurantProperties(g, restaurant)
    result = properties[dietary]
    print(result)
    return result
but any query to the knowledge base results in this error:
2020-10-22T18:01:22.347345241Z File "/opt/venv/lib/python3.7/site-packages/sanic/app.py", line 973, in handle_request
2020-10-22T18:01:22.347351643Z response = await response
2020-10-22T18:01:22.347357446Z File "/app/rasa_sdk/endpoint.py", line 102, in webhook
2020-10-22T18:01:22.347363522Z result = await executor.run(action_call)
2020-10-22T18:01:22.347369398Z File "/app/rasa_sdk/executor.py", line 392, in run
2020-10-22T18:01:22.347375473Z events = action(dispatcher, tracker, domain)
2020-10-22T18:01:22.347381348Z File "/app/actions/actions.py", line 33, in run
2020-10-22T18:01:22.347387063Z dietary = kb.getDietary(g, restaurant, dietaryQuestion)
2020-10-22T18:01:22.347392835Z File "/app/actions/knowledgebase.py", line 117, in getDietary
2020-10-22T18:01:22.347399112Z properties = queryRestaurantProperties(g, restaurant)
2020-10-22T18:01:22.347405111Z File "/app/actions/knowledgebase.py", line 86, in queryRestaurantProperties
2020-10-22T18:01:22.347411869Z result = g.V().has('name', restaurant).valueMap().next()
2020-10-22T18:01:22.347418048Z File "/opt/venv/lib/python3.7/site-packages/gremlin_python/process/traversal.py", line 89, in next
2020-10-22T18:01:22.347424293Z return self.__next__()
2020-10-22T18:01:22.347430069Z File "/opt/venv/lib/python3.7/site-packages/gremlin_python/process/traversal.py", line 48, in __next__
2020-10-22T18:01:22.347436333Z self.traversal_strategies.apply_strategies(self)
2020-10-22T18:01:22.347441940Z File "/opt/venv/lib/python3.7/site-packages/gremlin_python/process/traversal.py", line 573, in apply_strategies
2020-10-22T18:01:22.347447667Z traversal_strategy.apply(traversal)
2020-10-22T18:01:22.347453031Z File "/opt/venv/lib/python3.7/site-packages/gremlin_python/driver/remote_connection.py", line 149, in apply
2020-10-22T18:01:22.347459352Z remote_traversal = self.remote_connection.submit(traversal.bytecode)
2020-10-22T18:01:22.347465069Z File "/opt/venv/lib/python3.7/site-packages/gremlin_python/driver/driver_remote_connection.py", line 55, in submit
2020-10-22T18:01:22.347486749Z result_set = self._client.submit(bytecode)
2020-10-22T18:01:22.347493788Z File "/opt/venv/lib/python3.7/site-packages/gremlin_python/driver/client.py", line 127, in submit
2020-10-22T18:01:22.347499424Z return self.submitAsync(message, bindings=bindings).result()
2020-10-22T18:01:22.347505093Z File "/opt/venv/lib/python3.7/site-packages/gremlin_python/driver/client.py", line 146, in submitAsync
2020-10-22T18:01:22.347511092Z return conn.write(message)
2020-10-22T18:01:22.347516610Z File "/opt/venv/lib/python3.7/site-packages/gremlin_python/driver/connection.py", line 55, in write
2020-10-22T18:01:22.347522673Z self.connect()
2020-10-22T18:01:22.347529987Z File "/opt/venv/lib/python3.7/site-packages/gremlin_python/driver/connection.py", line 45, in connect
2020-10-22T18:01:22.347536097Z self._transport.connect(self._url, self._headers)
2020-10-22T18:01:22.347542222Z File "/opt/venv/lib/python3.7/site-packages/gremlin_python/driver/tornado/transport.py", line 36, in connect
2020-10-22T18:01:22.347547822Z lambda: websocket.websocket_connect(url))
2020-10-22T18:01:22.347553477Z File "/opt/venv/lib/python3.7/site-packages/tornado/ioloop.py", line 571, in run_sync
2020-10-22T18:01:22.347559295Z self.start()
2020-10-22T18:01:22.347564864Z File "/opt/venv/lib/python3.7/site-packages/tornado/platform/asyncio.py", line 132, in start
2020-10-22T18:01:22.347570978Z self.asyncio_loop.run_forever()
2020-10-22T18:01:22.347576693Z File "uvloop/loop.pyx", line 1351, in uvloop.loop.Loop.run_forever
2020-10-22T18:01:22.347582342Z File "uvloop/loop.pyx", line 484, in uvloop.loop.Loop._run
2020-10-22T18:01:22.347588222Z RuntimeError: Cannot run the event loop while another loop is running
I tried using nest_asyncio.apply, but that resulted in this error:
ValueError: Can't patch loop of type <class 'uvloop.Loop'>
which, according to the docs, is a deliberate limitation.
So I don't really know how to proceed.
Adding my comment above as an answer: in some cases it is necessary to downlevel the version of Tornado being used. There are issues you can run into if the event loop is already running when something else tries to create it. For now, downleveling to Tornado 4.5.1 alongside Gremlin Python should resolve the issue.
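A minimal way to apply that pin, assuming a pip-managed environment (leaving gremlinpython unpinned here is an assumption):
# requirements.txt
tornado==4.5.1
gremlinpython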

send file to serial port in python

I am trying to send the file '105.8k' below to my energy meter.
I am using the XMODEM example from PyPI, but I get the following error:
Traceback (most recent call last):
File "C:\Py\mainpy.py", line 68, in <module>
status = modem.send(f, retry=3)
File "C:\Users\admin\AppData\Local\Programs\Python\Python38-32\lib\site-packages\xmodem\__init__.py", line 270, in send
char = self.getc(1)
File "C:\py\mainpy.py", line 62, in getc
return ser.read(size) or None
AttributeError: 'str' object has no attribute 'read'
the code i use:
### send file to port###
ser = serialPortCombobox.get().split(" ")[0]
def getc(size, timeout=1):
return ser.read(size) or None
def putc(data, timeout=1):
return ser.write(data)
modem = XMODEM(getc, putc)
f = open('105.8k', 'rb')
status = modem.send(f, retry=3)
ser.close()
stream.close()
Thank you for your help.
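Note that the traceback points at ser being the plain string taken from the combobox rather than an open serial port. A hedged sketch of opening the port first, assuming pyserial and that the combobox entry begins with the port name (the baud rate below is an assumption):
import serial

port_name = serialPortCombobox.get().split(" ")[0]
ser = serial.Serial(port_name, baudrate=9600, timeout=1)  # now ser has read()/write()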

APScheduler Look up object error

APScheduler 3.3.1, Python 2.7
I use this code to run my jobs. When I use memory as the job store it works well, but I have too many jobs and the memory on my server is limited, so I switched to SQLAlchemyJobStore as the job store. Now I get the LookupError below. How can I solve it?
Code:
def script(indicator, strategy_name, real_time=False):
    # Avoid "No handlers could be found for logger apscheduler.scheduler"
    import logging
    logging.basicConfig(level=logging.ERROR,
                        format='%(name)-12s %(asctime)s %(levelname)-8s %(message)s',
                        datefmt='%a, %d %b %Y %H:%M:%S')
    try:
        job_defaults = {
            'coalesce': False,
            'max_instances': 1,
            "misfire_grace_time": config.real_time_script_interval + 5,
        }
        executors = {
            'default': ThreadPoolExecutor(60),
        }
        jobstores = {
            "default": SQLAlchemyJobStore(url='sqlite:///jobs.sqlite')
        }
        scheduler = BlockingScheduler(daemonic=True, jobstores=jobstores,
                                      job_defaults=job_defaults,
                                      executors=executors)
        module = __import__("%s.%s" % (indicator, strategy_name), fromlist=[strategy_name])
        if real_time:
            for st in module.strategy:
                scheduler.add_job(st.run, "interval", seconds=config.real_time_script_interval)
        else:
            for st in module.strategy:
                # Compute the nearest upcoming on-the-hour start time
                start_time = _recent_time(st.run_period)
                scheduler.add_job(st.run, "interval", **start_time)
        scheduler.start()
    except Exception as e:
        logger.get("run-log").error(error_msg())
Error:
apscheduler.jobstores.default Thu, 06 Apr 2017 10:50:32 ERROR Unable to restore job "d100a4b24e2d49c3ad51305fd846e5f5" -- removing it
Traceback (most recent call last):
File "/Users/wyx/bitcoin_workspace/fibo-strategy/.env/lib/python2.7/site-packages/apscheduler/jobstores/sqlalchemy.py", line 135, in _get_jobs
jobs.append(self._reconstitute_job(row.job_state))
File "/Users/wyx/bitcoin_workspace/fibo-strategy/.env/lib/python2.7/site-packages/apscheduler/jobstores/sqlalchemy.py", line 122, in _reconstitute_job
job.__setstate__(job_state)
File "/Users/wyx/bitcoin_workspace/fibo-strategy/.env/lib/python2.7/site-packages/apscheduler/job.py", line 260, in __setstate__
self.func = ref_to_obj(self.func_ref)
File "/Users/wyx/bitcoin_workspace/fibo-strategy/.env/lib/python2.7/site-packages/apscheduler/util.py", line 277, in ref_to_obj
raise LookupError('Error resolving reference %s: error looking up object' % ref)
LookupError: Error resolving reference base.strategy:Strategy.run: error looking up object
apscheduler.jobstores.default Thu, 06 Apr 2017 10:50:32 ERROR Unable to restore job "2602167cd3c745c2b0764a2b63da1a3a" -- removing it
Traceback (most recent call last):
File "/Users/wyx/bitcoin_workspace/fibo-strategy/.env/lib/python2.7/site-packages/apscheduler/jobstores/sqlalchemy.py", line 135, in _get_jobs
jobs.append(self._reconstitute_job(row.job_state))
File "/Users/wyx/bitcoin_workspace/fibo-strategy/.env/lib/python2.7/site-packages/apscheduler/jobstores/sqlalchemy.py", line 122, in _reconstitute_job
job.__setstate__(job_state)
File "/Users/wyx/bitcoin_workspace/fibo-strategy/.env/lib/python2.7/site-packages/apscheduler/job.py", line 260, in __setstate__
self.func = ref_to_obj(self.func_ref)
File "/Users/wyx/bitcoin_workspace/fibo-strategy/.env/lib/python2.7/site-packages/apscheduler/util.py", line 277, in ref_to_obj
raise LookupError('Error resolving reference %s: error looking up object' % ref)
LookupError: Error resolving reference base.strategy:Strategy.run: error looking up object
apscheduler.jobstores.default Thu, 06 Apr 2017 10:50:32 ERROR Unable to restore job "3eb917670e7642b8848a165268df8913" -- removing it
Traceback (most recent call last):
File "/Users/wyx/bitcoin_workspace/fibo-strategy/.env/lib/python2.7/site-packages/apscheduler/jobstores/sqlalchemy.py", line 135, in _get_jobs
jobs.append(self._reconstitute_job(row.job_state))
File "/Users/wyx/bitcoin_workspace/fibo-strategy/.env/lib/python2.7/site-packages/apscheduler/jobstores/sqlalchemy.py", line 122, in _reconstitute_job
job.__setstate__(job_state)
File "/Users/wyx/bitcoin_workspace/fibo-strategy/.env/lib/python2.7/site-packages/apscheduler/job.py", line 260, in __setstate__
self.func = ref_to_obj(self.func_ref)
File "/Users/wyx/bitcoin_workspace/fibo-strategy/.env/lib/python2.7/site-packages/apscheduler/util.py", line 277, in ref_to_obj
raise LookupError('Error resolving reference %s: error looking up object' % ref)
LookupError: Error resolving reference base.strategy:Strategy.run: error looking up object
Supplementary information for Alex Grönholm's question, since it is hard to explain in a comment.
In base/strategy_util.py:
base_strategy is one of several classes that inherit from the BaseStrategy class in base/strategy.py; BaseStrategy has a run method.
def _strategy(base_strategy, minute, ticker_table_format):
    class Strategy(base_strategy):
        run_period = minute

        def _init_params(self):
            self.ticker_table_format = ticker_table_format

    return Strategy()
def _create_strategy(base_strategy, minute_list=ALL_MINUTE):
    strategy_list = []
    for minute in minute_list:
        for ticker_table_format in const.TICKER_TABLE_FORMAT.ALL:
            st = _strategy(base_strategy, minute, ticker_table_format)
            strategy_list.append(st)
    return strategy_list

def ma_strategy(base_strategy):
    return _create_strategy(base_strategy)
In MA/touch_avg.py:
MA_TOUCH_AVG inherits from the BaseStrategy class:
from base.strategy import MA_TOUCH_AVG
from base.strategy_util import ma_strategy
strategy = ma_strategy(MA_TOUCH_AVG)
And then I use Click to invoke the strategy, like python run_strategy.py run MA touch_avg.
In run_strategy.py:
@cli.command()
@click.argument('indicator')
@click.argument('strategy_name')
def run(indicator, strategy_name):
    """ run indicator strategy_name """
    real_time_strategy_name = ["touch_avg", "limit"]
    util.script(indicator, strategy_name,
                real_time=strategy_name in real_time_strategy_name)
The reason you're having this problem is that you create a class dynamically, inside a function. APScheduler stores the reference to the scheduled function as module:varname. How is the scheduler supposed to find a class that you're making on the fly inside a function?
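One hedged way around this, given that APScheduler also accepts a textual module:callable reference for the job function: schedule a module-level function that re-imports the strategy on each run, instead of the dynamically built class. The wrapper module name and the index-based lookup below are assumptions for illustration:
# base/strategy_runner.py (hypothetical module)
def run_strategy(indicator, strategy_name, index):
    # This reference is importable, so the SQLAlchemy job store can
    # serialize it and restore it after a restart.
    module = __import__("%s.%s" % (indicator, strategy_name), fromlist=[strategy_name])
    module.strategy[index].run()

# When scheduling, pass the textual reference plus arguments:
scheduler.add_job("base.strategy_runner:run_strategy", "interval",
                  seconds=config.real_time_script_interval,
                  args=[indicator, strategy_name, i])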