Getting Started with Xero-Python SDK…nightmare - xero-api

I’m a newbie to using SDKs, and I'm using Jupyter Notebooks to play around.
I have pip installed xero-python per Xero’s GitHub page below:
https://github.com/XeroAPI/xero-python
I saved the repository to my hard drive, opened a Jupyter Notebook within the repository's master folder, and copied the code from the configuration section, but I get the error “no module called logging.settings found” (referring to the parameter passed into dictConfig).
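For what it's worth, that error usually means the notebook can't import a logging settings module that ships with Xero's sample app rather than with the installed xero-python package. A minimal sketch that sidesteps it by passing a plain dictionary straight to dictConfig (the logger name "xero_demo" is made up):

```python
# Inline replacement for the sample app's logging settings import:
# pass a plain dict straight to dictConfig.
import logging
import logging.config

LOGGING = {
    "version": 1,
    "disable_existing_loggers": False,
    "formatters": {
        "simple": {"format": "%(asctime)s %(levelname)s %(name)s: %(message)s"},
    },
    "handlers": {
        "console": {"class": "logging.StreamHandler", "formatter": "simple"},
    },
    "root": {"handlers": ["console"], "level": "INFO"},
}

logging.config.dictConfig(LOGGING)
log = logging.getLogger("xero_demo")  # arbitrary logger name for the demo
log.info("logging configured without a settings module")
```

With logging configured this way, the rest of the configuration section from the repository should run without the missing-module import.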
Could someone help me get this up and running, I’m sure there will be a cascade of other errors after sorting this one.
Also, I’d be really grateful if someone could point me to some resources on how to work with SDKs, I’m used to just pip installing a library and just getting familiar with the objects/methods in that library.
Thank you!!!

Related

Developing with various versions of tensorflow

I am trying to fix a bug present in TensorFlow 2.10/2.11. I know the bug but don't know how to edit the file in the corresponding versions so that I can submit a pull request. I would like to know how to go about the development process with git. Any help appreciated.
I downloaded the different versions from the releases page and tried importing the folder into Python, but that fails. I was expecting it to import the folder as a module. I would like to make changes in the releases and then submit a PR to the Keras team.

Install Scrapy on PythonAnywhere? (or Cloud9)

Can I run Scrapy on the free level of PythonAnywhere? I've looked, but haven't found instructions for installing it there.
If it can't be run on the free level of PythonAnywhere, is there another online environment where I can run Scrapy without needing to install Python and Scrapy on my computer?
EDIT: My question was just about PythonAnywhere, but in finding the answer to the question, I came across Cloud9 and found it to be a preferable alternative, which is explained in the answer.
Short summary:
Scrapy comes preinstalled on PythonAnywhere. No installation required.
I found an alternative that I like better: Cloud9. I was able to install Scrapy on it, but with a security issue that probably won't be a problem for me.
====================================
There were three parts to my question:
Can I run Scrapy in the free level of PythonAnywhere? This part has been answered: Yes, but with debilitating restrictions.
The other two parts have not been answered, but I've found some answers and will share them here.
What other online environments allow me to run Scrapy without needing to install Python and Scrapy on my computer? I haven't found a direct answer to this, but the free tutorial website, Python for Everybody ("Py4E"), has a page, Setting up your Python Development Environment, which lists four online Python environments. It gives a brief tutorial on PythonAnywhere and then just provides links to the other three: Trinket, Cloud9, and CodeAnywhere.
None of those four environments says anything about running Scrapy on them. With some more research, I did find out how to use Scrapy in PythonAnywhere, which I explain below. Of the other three, Cloud9 is part of Amazon's AWS suite, a sophisticated set of software tools that I've used other parts of before. Based on that, I assumed it also accommodates Scrapy and checked it out as well. I've added the results of that below as a new part 4 to my question.
Now, the main part of my question: How to install Scrapy on PythonAnywhere? The answer is:
You don't. It's already installed!
It's amazing that PythonAnywhere's otherwise excellent documentation doesn't say anything about this. I found it out by following instructions that I hoped would lead me to installing Scrapy:
First, since I'm new to Python (but not to programming), I ran through Py4E's tutorial on PythonAnywhere, which is really a quick introduction to Python: it got me to write a simple program, told me to use the Bash Unix shell instead of the Python interpreter ("$" instead of ">>>"), and had me save the program to a file.
Next, I went to Scrapy's installation instructions. It has this wonderful line: "... if you’re already familiar with installation of Python packages, you can install Scrapy and its dependencies from PyPI with: pip install Scrapy". Of course, it doesn't follow that by saying what to do if I'm not familiar with that. Sigh!
After that, I somehow found my way to Python's official instructions on Installing Packages, which starts by explaining that "package" means "a bundle of software to be installed", so I thought that might include Scrapy. So I ran through the instructions there, and about half-way through, it told me to run:
python3 -m pip install "SomeProject"
(* Footnote below on syntax of that command)
The instructions said that "SomeProject" is supposed to be a project that's included in the Python Package Index, so I went there and searched for Scrapy. It gave me a list of 681 projects with "scrapy" in the name, and some of them looked like they might be various versions of Scrapy itself. None of them were called just "Scrapy", but the Scrapy instruction quoted above said to use just that name. So I held my breath and entered:
python3 -m pip install Scrapy
And guess what I got? PythonAnywhere told me:
Requirement already satisfied: Scrapy in /usr/local/lib/python3.9/site-packages (2.5.0)
That was followed by a couple of dozen more lines that all started with "Requirement already satisfied", which I took to be the dependencies required by Scrapy, all of them already present and ready to roll.
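As an aside, you can check whether a distribution is already present from Python itself, without running pip install at all. A small standard-library sketch (the fake package name is just an example):

```python
# Check for an installed distribution without running "pip install".
# Uses only the standard library (Python 3.8+).
from importlib.metadata import version, PackageNotFoundError

def installed_version(dist_name):
    """Return the installed version of a distribution, or None if absent."""
    try:
        return version(dist_name)
    except PackageNotFoundError:
        return None

print(installed_version("pip"))                     # a version string on most setups
print(installed_version("not-a-real-package-xyz"))  # None
```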
So, hmmm, Scrapy is already there? To find out if that's really true, I went to the tutorial on Scrapy's website. The first thing it said was to create a project by using the command:
scrapy startproject tutorial
I entered that, and PythonAnywhere told me that it had successfully created a new project. Since this was a Scrapy command, I conclude that, yes, indeed, I already have Scrapy installed and running on PythonAnywhere. No installation necessary!
What about Cloud9? As I said above in my answer to part 2, when I found out about Cloud9, I was interested because it's part of Amazon Web Services ("AWS"). I've used other parts of AWS before and found them to be sophisticated, complicated, powerful, and well-documented. They are also very economical.
AWS is a commercial system run by Amazon. It charges fees based on usage, with no minimums, and with low-volume usage being free. The pricing page for Cloud9 shows it to be no exception. Cloud9 itself is free to use, but using it calls on other AWS resources that have charges.
The pricing page gives the following example: "If you use the default settings running an IDE for 4 hours per day for 20 days in a month with a 30-minute auto-hibernation setting your monthly charges for 90 hours of usage would be ... $2.05". That's less than half the lowest monthly cost of PythonAnywhere. (As stated in the answer by Giles Thomas, the free level of PythonAnywhere is not very useful for Scrapy.) I'm not sure how the amount of usage in the Cloud9 example compares with the amount of usage allowed by PythonAnywhere's $5/mo service, but my usage is going to be a lot less than either one, so I expect my cost of using Cloud9 to be very low, and possibly nothing. Furthermore, if I only use Scrapy for a project a couple of times a year, with PythonAnywhere, I'd have to close my account in between projects to stop being charged, but AWS doesn't charge me when I'm not using it, so I can keep the account with no cost between projects.
So based on both the quality of the AWS modules I've used and the low usage cost, I was very interested in Cloud9 as an alternative.
And I was not surprised to find that I could use Scrapy in it.
To figure that out, I quickly abandoned the webpage instructions in favor of downloading a pdf of the comprehensive User Guide from the documentation page. Comprehensive = 595 pages! But it's very well organized and cross-referenced, so I was able to learn what I needed by reading about 20 pages, which included a tutorial on using the GUI environment (pg 29..38) and another on using Python in Cloud9 (pg 423..7).
In that second tutorial, I ran:
python3 --version to find out that Python was already installed, version 3.7.10.
python -m pip --version to find out that pip version 20.2.2 is running.
After that tutorial, I was ready to find out if Scrapy is there. I had learned by then about pip show, so I ran:
python -m pip show Scrapy
The answer was no:
WARNING: Package(s) not found: Scrapy
So I repeated the command that I'd done earlier in PythonAnywhere:
python3 -m pip install Scrapy
This time, there were very few "Requirement already satisfied"s and instead there were a lot of "Collecting ... Downloading"s, followed by "Installing collected packages" and then "Successfully installed" with a long list that included Scrapy-2.6.1.
I repeated python -m pip show Scrapy and got several lines of output that told me Scrapy 2.6.1 is installed. Finally, I ran the same test I'd run before in PythonAnywhere, the first instruction in the official Scrapy tutorial:
scrapy startproject tutorial
and got the same output as before, telling me that the project had been created.
Bingo! I have Scrapy running in Cloud9.
On the negative side, there was a problem here. AWS has two levels of sign-in authority, called root users and IAM users. For proper security, I should be running Cloud9 as an IAM user, but there was a problem signing in that way. I posted a question on SO about that, but while waiting for an answer, I went ahead and started using Cloud9 as the root user. In the course of that, I got the message:
WARNING: Running pip install with root privileges is generally not a good idea.
That warning came with a suggestion of an alternative command that didn't make sense and didn't work when I tried it. So I'm not sure how much I've messed up the security of my AWS account by what I've been doing here. My work is not secretive, so the security may be a non-issue, but I'd still like to figure out how to proceed as an IAM user and clean up any damage I might have caused by what I've been doing as the root user. If anyone knows about that, please respond to the SO question about it linked in the previous paragraph.
So now I've got Scrapy running in Cloud9, and I'm going to go find out if it can get the data I need. I'll make another edit here if there are any surprises in terms of Cloud9 either (a) not being able to do something or (b) resulting in unexpected charges.
====================================
(*) Footnote on syntax of python3 -m pip install "SomeProject":
Since I was working in something called PythonAnywhere, I was tempted to think that this was a Python command. But then I had to remember that, within PythonAnywhere, I was working in Bash, a Unix shell. So python3 is a shell command. I haven't found documentation of that exact command, but I did find documentation of the command it's presumably based on: python. That documentation says, "-m module-name Searches ... for the named module and runs the corresponding .py file as a script." So this means that pip is a Python module for installing Python packages, and install <project name> is a subcommand passed to the pip module. (Somebody please correct me if I've said any of that wrong.)
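The -m behaviour described in the footnote can be demonstrated from inside Python with the standard library's runpy module, which is the machinery behind the -m switch. A small sketch (the module name "hello" is made up):

```python
# Demonstrate what "python3 -m <module>" does: find the module on
# sys.path and execute it as the __main__ script.
import os
import runpy
import sys
import tempfile

# Create a throwaway module to stand in for a real one ("hello" is made up).
tmp_dir = tempfile.mkdtemp()
with open(os.path.join(tmp_dir, "hello.py"), "w") as f:
    f.write(
        "MESSAGE = 'hi from -m'\n"
        "if __name__ == '__main__':\n"
        "    print(MESSAGE)\n"
    )

# Put the directory on the module search path, then run the module the
# same way "python3 -m hello" would: with __name__ set to "__main__".
sys.path.insert(0, tmp_dir)
result_globals = runpy.run_module("hello", run_name="__main__")
print(result_globals["MESSAGE"])
```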
You can, but free accounts on PythonAnywhere are limited to accessing sites on a whitelist of official public APIs, so you will probably not be able to access non-API sites.

reSolve and React Native integration

Is there any working example available involving reSolve in React Native?
Suggestions of comparable solutions (running without any back-end connectivity in place) either in React Native or Flutter are also most appreciated.
GitHub contains an example in the reimagined/react-native-example repository but unfortunately it isn't working. It seems the current version is pretty outdated.
Referring to that repository, the command yarn create resolve-app -e shopping-list-advanced shopping-list-advanced results in the following error message.
Error: No such example, shopping-list-advanced. The following examples are available
So you are unable to download the sample code since it does not appear to exist.
So I tried downloading and inflating the ZIP manually. Afterwards I ran yarn install (which takes a while and reports quite a lot of warnings). Next, I used the command yarn start:native. This doesn't work either and results in the following error message.
ERROR: Node.js version 16.13.2 is no longer supported. expo-cli supports the following Node.js versions: >=10.13.0 <11.0.0 (Active LTS), >=12.0.0 <13.0.0 (Active LTS), >=13.0.0 <14.0.0 (Current Release)
In an attempt to solve the problem, I updated the expo-cli version in the native\package.json file to 5.0.3. Running yarn install and yarn start:native again results in a new error message.
Invalid regular expression:
/(ui[\]node_modules[\]react-native[\].|ui[\]node_modules[\]expo[\].|node_modules[\]react[\]dist[\].|website\node_modules\.|heapCapture\bundle.js|.\tests\.)$/:
Range out of order in character class.
This doesn't seem to go anywhere... In other words, I am a bit stuck here since I don't know what this message actually means.
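"Range out of order in character class" is what a regex engine reports when a character class contains a reversed range such as [z-a]; here, the mangled backslash escaping around the Windows-style path separators most likely produced one accidentally. The same failure can be reproduced in Python's regex engine (the pattern below is illustrative, not the bundler's actual one):

```python
import re

# A character class whose endpoints are reversed triggers the same
# family of error the Metro bundler reported.
try:
    re.compile(r"[z-a]")
    outcome = "compiled"
except re.error as exc:
    outcome = f"regex error: {exc}"

print(outcome)
```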
Thank you for your feedback.
The team decided to extract the React Native example to a separate repository and postpone its maintenance to keep focused on more important tasks like polishing the server-side.
As you mentioned, the example is outdated; there have been many changes in the client configuration since then. In the future, we may work on a guide on how to use reSolve in React Native and other frameworks.
In the meantime, you can try to add reSolve in your ReactNative app using our docs.
We provide several client libraries that can be helpful:
https://reimagined.github.io/resolve/docs/api/client/resolve-client/
https://reimagined.github.io/resolve/docs/api/client/resolve-react-hooks/
https://reimagined.github.io/resolve/docs/api/client/resolve-redux/
Feel free to contact us through GitHub in case of any difficulties; we'll be glad to help you.

Can't see or publish packages with Verdaccio

I’m fairly new to Verdaccio. Been familiar with the tool for quite some time, but this is my first time trying to use it. I’ve installed it locally for the purpose of trying to figure out the right syntax for handling versioning, tagging and publishing a shared component library for work, but I’m having trouble getting this package published to my locally running instance of Verdaccio, and I’m struggling to understand why the publish command is failing. Was hoping someone here might be able to help.
First off, I should say that I have it installed and running locally: I can browse to http://localhost:7890 and see the Verdaccio web UI, and it says that I have “No package published yet.” That makes sense, because I haven’t been able to successfully publish anything yet. I’ve created a user with the npm adduser --registry http://localhost:7890 command, and after that I ran the following command to attempt to publish: npm publish --access public --registry http://localhost:7890. When I run this command, I get the following error: “EPUBLISHCONFLICT … Cannot publish over existing version.”
Now, I can in fact see, when I look in .local/share/verdaccio/storage, that there is a folder for the scope that I published with, and in that folder there is a folder for the package that I apparently published, and it only has a package.json file in it. I’ve attempted to wipe this all clean, reinstall Verdaccio, etc., but nothing seems to fix the issue. I can’t seem to make this package go away, or to get it to display in the UI either. After publishing (unsuccessfully), and despite the fact that it says this version of the package exists, I still see nothing in the UI. It still just says “No package published yet”, which I still don’t really understand.
Any ideas would be appreciated. This has me pretty stumped. Thanks.
Make sure your package.json uses the same npm registry you published to last time (it needs to be the same one).
I was getting this issue too. The following works for me:
npm publish --registry=http://yourhost:yourport

Failure to create reference rasterizer 3D device

I am using Windows 7 and wanted to run "Graphing Calculator 3D", but when I first execute the file, it requires Java 3D or DirectX D3D. I have installed both of them to be sure, but the error message has changed to "Failure to create reference rasterizer 3D device - DD2DER not available"... Is this dangerous to my computer, or what should I do to solve this problem? Thanks
Did you download Graphing Calculator 3D directly from Runiter Company's website?
If so, you shouldn't need to install Java3D separately, as Java3D is already included with the installer.
I suggest uninstalling any existing Java3D installation (because it conflicts with the included version of Java3D), then downloading Graphing Calculator 3D from the Runiter website and installing it again.
If that doesn't work, please post the exact error message.