At 5:31:35 in the video, I typed the code exactly the same way and ran it, but I keep getting this error:
INFO: Could not find files for the given pattern(s).
Brownie v1.18.1 - Python development framework for Ethereum
File "C:\Users\Morounfola\AppData\Local\Programs\Python\Python38\lib\site-packages\brownie_cli_main_.py", line 64, in main
importlib.import_module(f"brownie._cli.{cmd}").main()
File "C:\Users\Morounfola\AppData\Local\Programs\Python\Python38\lib\site-packages\brownie_cli\run.py", line 42, in main
active_project.load_config()
File "C:\Users\Morounfola\AppData\Local\Programs\Python\Python38\lib\site-packages\brownie\project\main.py", line 462, in load_config
_load_project_config(self._path)
File "C:\Users\Morounfola\AppData\Local\Programs\Python\Python38\lib\site-packages\brownie_config.py", line 222, in _load_project_config
and "cmd_settings" in values
TypeError: argument of type 'NoneType' is not iterable
This is the code:
from brownie import FundMe, MockV3Aggregator, network, config
from scripts.helpful_scripts import get_account


def deploy_fund_me():
    account = get_account()
    # pass the price feed address to our fund me contract
    # if we are on a persistent network like rinkeby, use the associated address
    # otherwise, deploy mocks
    if network.show_active() != "development":
        price_feed_address = config["networks"][network.show_active()][
            "eth_usd_price_feed"
        ]
    else:
        print(f"The active network is {network.show_active()}")
        print("Deploying Mocks...")
        mock_aggregator = MockV3Aggregator.deploy(
            18, 200000000000000000000, {"from": account}
        )
        price_feed_address = mock_aggregator.address
        print("Mocks Deployed!")
    fund_me = FundMe.deploy(price_feed_address, {"from": account}, publish_source=True)
    print(f"Contract deployed to {fund_me.address}")


def main():
    deploy_fund_me()
I answered this here:
File "brownie/_config.py", line 222, in _load_project_config and "cmd_settings" in values TypeError: argument of type 'NoneType' is not iterable
I had the same exact issue and it was because my brownie-config.yml file was incorrect. You can't have any blank variables in your config file.
Under networks I had:
networks:
  rinkeby:
    eth_usd_price_feed: "0x8A753747A1Fa494EC906cE90E9f37563A8AF630e"
    verify: True
  kovan:
  mainnet:
Having 'kovan' and 'mainnet' left blank caused the error.
The solution is to either delete those two lines or comment them out like this:
networks:
  rinkeby:
    eth_usd_price_feed: "0x8A753747A1Fa494EC906cE90E9f37563A8AF630e"
    verify: True
  # kovan:
  # mainnet:
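For context, a blank key in YAML is parsed as None, and the config loader then tries the membership test "cmd_settings" in values on that None, which is exactly the TypeError above. A minimal sketch of the failure mode, using PyYAML directly (roughly what happens when the config file is read):

import yaml

cfg = yaml.safe_load("""
networks:
  rinkeby:
    eth_usd_price_feed: "0x8A753747A1Fa494EC906cE90E9f37563A8AF630e"
    verify: True
  kovan:
""")

values = cfg["networks"]["kovan"]
print(values)             # None -- a blank key has no value
"cmd_settings" in values  # TypeError: argument of type 'NoneType' is not iterable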
I'm running some code that works when a GPU is available, but I'm trying to figure out how to run it locally on CPU. Here's the error:
2022-07-06 17:58:39,042 - INFO - allennlp.common.plugins - Plugin allennlp_models available
Traceback (most recent call last):
  File "/Users/xiaoqingwan/opt/miniconda3/envs/absa/bin/allennlp", line 8, in <module>
    sys.exit(run())
  File "/Users/xiaoqingwan/opt/miniconda3/envs/absa/lib/python3.7/site-packages/allennlp/__main__.py", line 34, in run
    main(prog="allennlp")
  File "/Users/xiaoqingwan/opt/miniconda3/envs/absa/lib/python3.7/site-packages/allennlp/commands/__init__.py", line 118, in main
    args.func(args)
  File "/Users/xiaoqingwan/opt/miniconda3/envs/absa/lib/python3.7/site-packages/allennlp/commands/predict.py", line 205, in _predict
    predictor = _get_predictor(args)
  File "/Users/xiaoqingwan/opt/miniconda3/envs/absa/lib/python3.7/site-packages/allennlp/commands/predict.py", line 105, in _get_predictor
    check_for_gpu(args.cuda_device)
  File "/Users/xiaoqingwan/opt/miniconda3/envs/absa/lib/python3.7/site-packages/allennlp/common/checks.py", line 131, in check_for_gpu
    " 'trainer.cuda_device=-1' in the json config file." + torch_gpu_error
allennlp.common.checks.ConfigurationError: Experiment specified a GPU but none is available; if you want to run on CPU use the override 'trainer.cuda_device=-1' in the json config file.
module 'torch.cuda' has no attribute '_check_driver'
Could you give me some guidance on what to do? Where is the config file and what is it called?
Here's the code (originally from: https://colab.research.google.com/drive/1F9zW_nVkwfwIVXTOA_juFDrlPz5TLjpK?usp=sharing):
# Use pretrained SpanModel weights for prediction
import sys

sys.path.append("aste")
from pathlib import Path

from data_utils import Data, Sentence, SplitEnum
from wrapper import SpanModel


def predict_sentence(text: str, model: SpanModel) -> Sentence:
    path_in = "temp_in.txt"
    path_out = "temp_out.txt"
    sent = Sentence(tokens=text.split(), triples=[], pos=[], is_labeled=False, weight=1, id=1)
    data = Data(root=Path(), data_split=SplitEnum.test, sentences=[sent])
    data.save_to_path(path_in)
    model.predict(path_in, path_out)
    data = Data.load_from_full_path(path_out)
    return data.sentences[0]


text = "Did not enjoy the new Windows 8 and touchscreen functions ."
model = SpanModel(save_dir="pretrained_14lap", random_seed=0)
sent = predict_sentence(text, model)
Try using something like:
import torch

device = torch.device("cpu")
model = SpanModel(save_dir="pretrained_14lap", random_seed=0)
model.to(device)
The config file is inside of the model.tar.gz in the pretrained_14lap directory (it is always named config.json). It also contains the param "cuda_device": 0, which may be causing your problem.
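If you want to go that route, here is a hedged sketch of patching the archive with only the standard library. It assumes config.json sits at the top level of model.tar.gz (the usual AllenNLP layout) and that overwriting the archive in place is acceptable; the error message suggests trainer.cuda_device is the value to flip.

import json
import os
import shutil
import tarfile

archive = "pretrained_14lap/model.tar.gz"
workdir = "pretrained_14lap/model_cpu_tmp"

# Unpack the archive, set cuda_device to -1 (CPU), and repack it in place.
with tarfile.open(archive) as tar:
    tar.extractall(workdir)

config_path = os.path.join(workdir, "config.json")
with open(config_path) as f:
    config = json.load(f)

# -1 tells AllenNLP to run on CPU instead of GPU 0.
config.setdefault("trainer", {})["cuda_device"] = -1

with open(config_path, "w") as f:
    json.dump(config, f, indent=2)

with tarfile.open(archive, "w:gz") as tar:
    for name in os.listdir(workdir):
        tar.add(os.path.join(workdir, name), arcname=name)

shutil.rmtree(workdir)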
I have a custom algorithm for text prediction. I want to deploy it in SageMaker. I am following this tutorial:
https://docs.aws.amazon.com/sagemaker/latest/dg/tf-example1.html
The only change from the tutorial is:
from sagemaker.tensorflow import TensorFlow

iris_estimator = TensorFlow(entry_point='/home/ec2-user/SageMaker/sagemaker.py',
                            role=role,
                            output_path=model_artifacts_location,
                            code_location=custom_code_upload_location,
                            train_instance_count=1,
                            train_instance_type='ml.c4.xlarge',
                            training_steps=1000,
                            evaluation_steps=100,
                            source_dir="./",
                            requirements_file="requirements.txt")
%%time
import boto3
train_data_location = 's3://sagemaker-<my bucket>'
iris_estimator.fit(train_data_location)
Note: the dataset is at the root of the bucket.
Error log:
ValueError: Error training sagemaker-tensorflow-2018-06-19-07-11-13-634: Failed Reason: AlgorithmError: uncaught exception during training: Import by filename is not supported.
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/container_support/training.py", line 36, in start
    fw.train()
  File "/usr/local/lib/python2.7/dist-packages/tf_container/train_entry_point.py", line 143, in train
    customer_script = env.import_user_module()
  File "/usr/local/lib/python2.7/dist-packages/container_support/environment.py", line 101, in import_user_module
    user_module = importlib.import_module(script)
  File "/usr/lib/python2.7/importlib/__init__.py", line 37, in import_module
    __import__(name)
ImportError: Import by filename is not supported.
I solved this issue. The problem was using an absolute path for entry_point.
When you use the source_dir parameter, the path to entry_point should be relative to source_dir.
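For example, a minimal sketch of the corrected call, keeping the other arguments from the question and assuming the notebook's working directory (source_dir="./") contains sagemaker.py:

from sagemaker.tensorflow import TensorFlow

# entry_point is now relative to source_dir instead of an absolute path
iris_estimator = TensorFlow(entry_point='sagemaker.py',
                            role=role,
                            output_path=model_artifacts_location,
                            code_location=custom_code_upload_location,
                            train_instance_count=1,
                            train_instance_type='ml.c4.xlarge',
                            training_steps=1000,
                            evaluation_steps=100,
                            source_dir="./",
                            requirements_file="requirements.txt")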
I solved it with:
region = boto3.Session().region_name
train_data_location = 's3://sagemaker-<my bucket>'.format(region)
Maybe I'm getting something basic confused, but I can't seem to work out how to fix this issue.
The interpreter gives me:
Traceback (most recent call last):
  File "python", line 153, in <module>
  File "python", line 90, in finish
  File "python", line 25, in registered
AttributeError: 'Marathon' object has no attribute 'runnerList'
These all seem to be the same issue. Surely the instance does have these members; I'm not sure why it thinks it doesn't.
class Marathon:

    # Creator
    # -------

    runnersList = []
    timesList = []

    def __init__(self):
        """Set up this marathon without any runners."""

    # Inspectors
    # ----------
    # These are called anytime.

    def registered(self, runner):
        """Return True if runner has registered, otherwise False."""
        for item in self.runnerList:
            if item == runner:
                return True
            else:
                return False
I have written a Scrapy scraper that writes data out using the JsonItemExporter, and I have worked out how to export this data to my AWS S3 bucket using the following spider settings in ScrapingHub:
AWS_ACCESS_KEY_ID = AAAAAAAAAAAAAAAAAAAA
AWS_SECRET_ACCESS_KEY = Abababababababababababababababababababab
FEED_FORMAT = json
FEED_URI = s3://scraper-dexi/my-folder/jobs-001.json
What I need to do is set the date/time on the output file dynamically, ideally in a format like jobs-20171215-1000.json, but I don't know how to set a dynamic FEED_URI with ScrapingHub.
There is not much information online, and the only example I can find is here on the ScrapingHub site, but unfortunately it does not work.
When I apply these settings, based on the example in the documentation:
AWS_ACCESS_KEY_ID = AAAAAAAAAAAAAAAAAAAA
AWS_SECRET_ACCESS_KEY = Abababababababababababababababababababab
FEED_FORMAT = json
FEED_URI = s3://scraper-dexi/my-folder/jobs-%(time).json
Note the %(time) in my URI
The scrape fails with the following errors:
[scrapy.utils.signal] Error caught on signal handler: <bound method ?.open_spider of <scrapy.extensions.feedexport.FeedExporter object at 0x7fd11625d410>>
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/site-packages/twisted/internet/defer.py", line 150, in maybeDeferred
    result = f(*args, **kw)
  File "/usr/local/lib/python2.7/site-packages/pydispatch/robustapply.py", line 55, in robustApply
    return receiver(*arguments, **named)
  File "/usr/local/lib/python2.7/site-packages/scrapy/extensions/feedexport.py", line 190, in open_spider
    uri = self.urifmt % self._get_uri_params(spider)
ValueError: unsupported format character 'j' (0x6a) at index 53

[scrapy.utils.signal] Error caught on signal handler: <bound method ?.item_scraped of <scrapy.extensions.feedexport.FeedExporter object at 0x7fd11625d410>>
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/site-packages/twisted/internet/defer.py", line 150, in maybeDeferred
    result = f(*args, **kw)
  File "/usr/local/lib/python2.7/site-packages/pydispatch/robustapply.py", line 55, in robustApply
    return receiver(*arguments, **named)
  File "/usr/local/lib/python2.7/site-packages/scrapy/extensions/feedexport.py", line 220, in item_scraped
    slot = self.slot
AttributeError: 'FeedExporter' object has no attribute 'slot'
I misunderstood the importance of the s in the documentation and did not realize that it was part of the token signature.
I altered
FEED_URI = s3://scraper-dexi/my-folder/jobs-%(time).json
to
FEED_URI = s3://scraper-dexi/my-folder/jobs-%(time)s.json
as per the documentation, which solved the problem:
%(time)
changed to
%(time)s
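If you specifically want the jobs-20171215-1000.json layout rather than the default %(time)s expansion, one option (a sketch, not the only way) is to build the timestamp yourself in the spider's custom_settings:

from datetime import datetime

import scrapy


class JobsSpider(scrapy.Spider):
    # Hypothetical spider name; the real spider keeps its own name and logic.
    name = "jobs"

    # The URI is built once, when the spider class is defined,
    # using a YYYYMMDD-HHMM timestamp instead of %(time)s.
    custom_settings = {
        "FEED_FORMAT": "json",
        "FEED_URI": "s3://scraper-dexi/my-folder/jobs-%s.json"
        % datetime.utcnow().strftime("%Y%m%d-%H%M"),
    }

    def parse(self, response):
        pass  # existing parsing logic goes here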
I have the following WLST script:
import wlstModule
from com.bea.wli.sb.management.configuration import SessionManagementMBean
from com.bea.wli.sb.management.configuration import ALSBConfigurationMBean
from com.bea.wli.config import Ref

#=======================================================================================
# Utility function to read a binary file
#=======================================================================================
def readBinaryFile(fileName):
    file = open(fileName, 'rb')
    bytes = file.read()
    return bytes

#=======================================================================================
# Utility function to create an arbitrary session name
#=======================================================================================
def createSessionName():
    sessionName = String("SessionScript"+Long(System.currentTimeMillis()).toString())
    return sessionName

def getSessionManagementMBean(sessionName):
    SessionMBean = findService("SessionManagement", "com.bea.wli.sb.management.configuration.SessionManagementMBean")
    SessionMBean.createSession(sessionName)
    return SessionMBean

SessionMBean = None
importJar = 'C:\\OSB_PROJECT.jar'
theBytes = readBinaryFile(importJar)
sessionName = createSessionName()
SessionMBean = getSessionManagementMBean(sessionName)
The result is an error:
wls:/offline> execfile('C:\script.py')
Traceback (innermost last):
  File "", line 1, in ?
  File "C:\script.py", line 31, in ?
  File "C:\script.py", line 22, in getSessionManagementMBean
NameError: findService
How can I fix this?
Are you ever connecting to your server and accessing the domain runtime? You should be doing something like the following:
connect("weblogic", "weblogic", "t3://localhost:7001")
domainRuntime()
# obtain session management mbean to create a session.
# This mbean instance can be used more than once to
# create/discard/commit many sessions
sessionMBean = findService(SessionManagementMBean.NAME, SessionManagementMBean.TYPE)
See more here:
http://docs.oracle.com/cd/E13171_01/alsb/docs25/javadoc/com/bea/wli/sb/management/configuration/SessionManagementMBean.html