I am trying to run this code using Notepad++ version 8.14
from dearpygui.core import *
from dearpygui.simple import *

# window object settings
set_main_window_size(540, 720)
set_global_font_scale(1.25)
set_theme("Gold")

with window("SMS Spam Filter", width=520, height=667):
    print("GUI is running")

start_dearpygui()
but the output is an error:
from dearpygui.core import *
ModuleNotFoundError: No module named 'dearpygui.core'
I have tried pip install dearpygui in the command prompt, but it showed the same error. Can anyone solve this?
DearPyGui is under heavy development, and the code you are trying to run is the "old" way of doing things (prior to version 0.6). Here is a comparison between an old and an up-to-date version of the library:
Old version
from dearpygui.core import *

def save_callback(sender, data):
    print("Save Clicked")

add_text("Hello, world")
add_button("Save", callback=save_callback)
add_input_text("string", default_value="Quick brown fox")
add_slider_float("float", default_value=0.273, max_value=1)

start_dearpygui()
New version
import dearpygui.dearpygui as dpg

def save_callback():
    print("Save Clicked")

with dpg.window(label="Example Window"):
    dpg.add_text("Hello, world")
    dpg.add_button(label="Save", callback=save_callback)
    dpg.add_input_text(label="string", default_value="Quick brown fox")
    dpg.add_slider_float(label="float", default_value=0.273, max_value=1)

dpg.start_dearpygui()
See the docs for more details.
I tried the new version but received a segmentation fault.
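If the installed DearPyGui is version 1.0 or newer, the snippet above is itself outdated: 1.x requires explicit context and viewport setup, and skipping it can crash with a segmentation fault. A minimal sketch of the 1.x boilerplate (this assumes a 1.x install; check yours with pip show dearpygui):

import dearpygui.dearpygui as dpg

def save_callback():
    print("Save Clicked")

dpg.create_context()
dpg.create_viewport(title="Example", width=540, height=720)

with dpg.window(label="Example Window"):
    dpg.add_text("Hello, world")
    dpg.add_button(label="Save", callback=save_callback)

dpg.setup_dearpygui()
dpg.show_viewport()
dpg.start_dearpygui()  # blocks until the viewport is closed
dpg.destroy_context()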
I have Postgres installed on PC1 and I am connecting to the database from PC2. I have modified the settings so that Postgres on PC1 is accessible from the local network.
On PC2 I am doing the following:
import pandas as pd
from sqlalchemy import create_engine

z1 = create_engine('postgresql://postgres:***@192.168.40.154:5432/myDB')
z2 = pd.read_sql('select * from public."myTable"', z1)
I get the error:
File "C:\Program Files\Python311\Lib\site-packages\pandas\io\sql.py", line 1405, in execute
return self.connectable.execution_options().execute(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
AttributeError: 'OptionEngine' object has no attribute 'execute'
While running the same code on PC1 I get no error.
I just noticed that it happens only when reading from the DB; to_sql works fine. Something seems to be missing on PC2, because I get the same error whether I use 192.168.40.154:5432 or localhost:5432.
Edit:
The following modification worked, but I am not sure why. Can someone please explain what the reason for this could be?
from sqlalchemy.sql import text

connection = z1.connect()
stmt = text("SELECT * FROM public.myTable")
z2 = pd.read_sql(stmt, connection)
Edit2:
PC1:
pd.__version__
'1.5.2'
import sqlalchemy
sqlalchemy.__version__
'1.4.46'
PC2:
pd.__version__
'1.5.3'
import sqlalchemy
sqlalchemy.__version__
'2.0.0'
Does it mean that if I update the packages on PC1 everything is going to break?
I ran into the same problem just today, and it is basically the SQLAlchemy version: if you look at the documentation here, SQLAlchemy 2.0.0 was released a few days ago, so pandas has not been updated for it yet. For now I think the solution is sticking with a 1.4.x version (e.g. pip install "SQLAlchemy<2.0").
The sqlalchemy.sql.text() part is not the issue. Passing a Connection obtained via z1.connect(), rather than the Engine returned by create_engine(), seems to have done the trick.
You should also use a context manager, in addition to a SQLAlchemy SQL clause using text(), e.g.:
import pandas as pd
from sqlalchemy import create_engine, text

engine = create_engine('postgresql://postgres:***@192.168.40.154:5432/myDB')

with engine.begin() as connection:
    res = pd.read_sql(
        sql=text('SELECT * FROM public."myTable"'),
        con=connection,
    )
As explained here https://pandas.pydata.org/docs/reference/api/pandas.read_sql.html :
con : SQLAlchemy connectable, str, or sqlite3 connection
Using SQLAlchemy makes it possible to use any DB supported by that library. If a DBAPI2 object, only sqlite3 is supported. The user is responsible for engine disposal and connection closure for the SQLAlchemy connectable; str connections are closed automatically. See here.
--> especially this point: https://docs.sqlalchemy.org/en/20/core/connections.html#connect-and-begin-once-from-the-engine
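As a rough sketch of the two patterns described at that link (reusing the engine and the text import from the example above): engine.connect() yields a plain connection, while engine.begin() additionally wraps it in a transaction that commits on success and rolls back on error.

with engine.connect() as conn:
    df = pd.read_sql(text('SELECT * FROM public."myTable"'), conn)

with engine.begin() as conn:
    df = pd.read_sql(text('SELECT * FROM public."myTable"'), conn)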
Querying data from BigQuery had been working for me. Then I updated my Google packages (e.g. google-cloud-bigquery) and suddenly I could no longer download data. Unfortunately, I no longer know which old version of the package I was using. Now I'm using version '1.26.1' of google-cloud-bigquery.
Here is my code, which used to run:
from google.cloud import bigquery
from google.oauth2 import service_account
import pandas as pd
KEY_FILE_LOCATION = "path_to_json"
PROJECT_ID = 'bigquery-123454'

credentials = service_account.Credentials.from_service_account_file(KEY_FILE_LOCATION)
client = bigquery.Client(credentials=credentials, project=PROJECT_ID)
query_job = client.query("""
SELECT
x,
y
FROM
`bigquery-123454.624526435.ga_sessions_*`
WHERE
_TABLE_SUFFIX BETWEEN '20200501' AND '20200502'
""")
results = query_job.result()
df = results.to_dataframe()
Except for the last line, df = results.to_dataframe(), the code works perfectly. Now I get a weird error which consists of three parts:
Part 1:
_InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
status = StatusCode.UNAVAILABLE
details = "failed to connect to all addresses"
debug_error_string = "{"created":"#1596627109.629000000","description":"Failed to pick subchannel","file":"src/core/ext/filters/client_channel/client_channel.cc","file_line":3948,"referenced_errors":[{"created":"#1596627109.629000000","description":"failed to connect to all addresses","file":"src/core/ext/filters/client_channel/lb_policy/pick_first/pick_first.cc","file_line":394,"grpc_status":14}]}"
>
Part 2:
ServiceUnavailable: 503 failed to connect to all addresses
Part 3:
RetryError: Deadline of 600.0s exceeded while calling functools.partial(<function _wrap_unary_errors.<locals>.error_remapped_callable at 0x0000000010BD3C80>, table_reference {
project_id: "bigquery-123454"
dataset_id: "_a0003e6c1ab4h23rfaf0d9cf49ac0e90083ca349e"
table_id: "anon2d0jth_f891_40f5_8c63_76e21ab5b6f5"
}
requested_streams: 1
read_options {
}
format: ARROW
parent: "projects/bigquery-123454"
, metadata=[('x-goog-request-params', 'table_reference.project_id=bigquery-123454&table_reference.dataset_id=_a0003e6c1abanaw4egacf0d9cf49ac0e90083ca349e'), ('x-goog-api-client', 'gl-python/3.7.3 grpc/1.30.0 gax/1.22.0 gapic/1.0.0')]), last exception: 503 failed to connect to all addresses
I don't have an explanation for this error. I don't think it has anything to do with me updating the packages.
Once I had problems with the proxy but these problems caused another/different error.
My colleague said that the project "bigquery-123454" is still available in BigQuery.
Any ideas?
Thanks for your help in advance!
A 503 error occurs when there is a network issue. Try again after some time or retry the job.
You can read more about the error on the Google Cloud page.
I found the answer:
After downgrading the package "google-cloud-bigquery" from version 1.26.1 to 1.18.1 the code worked again! So the new package caused the errors.
I downgraded the package using pip install google-cloud-bigquery==1.18.1 --force-reinstall
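For anyone who cannot downgrade: starting around version 1.24, to_dataframe() tries to fetch results through the BigQuery Storage API over gRPC by default, and proxies or firewalls sometimes block that path, which matches the "failed to connect to all addresses" gRPC message above. A hedged sketch of keeping 1.26.1 but forcing the plain REST download instead:

# Assumes google-cloud-bigquery >= 1.24: create_bqstorage_client=False
# skips the gRPC-based BigQuery Storage API and uses the REST API.
results = query_job.result()
df = results.to_dataframe(create_bqstorage_client=False)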
I'm trying to use PyUSB over libusb-1.0 to read data from an old PS/2 mouse, using a PS/2-to-USB adapter that presents it as a HID device.
I am able to access the device, but when I try to send it a GET_REPORT request over a control transfer, it shows me this error:
[Errno None] b'libusb0-dll:err [claim_interface] could not claim interface 0, win error: The parameter is incorrect.\r\n'
Here is my code:
import time

import usb.core as core
from usb.core import USBError

dev = core.find(idVendor=0x13ba, idProduct=0x0018, address=0x03)

interface = 0
endpoint = dev[0][(interface, 0)][0]

dev.set_configuration()

collected = 0
attempts = 50
while collected < attempts:
    try:
        print(dev.ctrl_transfer(0b10100001, 0x01, wValue=100, data_or_wLength=64))
        collected += 1
    except USBError as e:
        print(e)
    time.sleep(0.1)
I'm using Python 3.x on Windows 10 (Lenovo G510, if it matters to anyone).
The driver I installed is libusb-win32, installed using Zadig.
Any help will be appreciated!
Thanks
EDIT:
I tried using WinUSB so that it would work with libusb-1.0. It didn't find the device: usb.core.find() returned None.
Continuing with WinUSB and libusb-1.0, I eventually found the device successfully, but now it appears to have no configuration.
dev.set_configuration()
returns:
File "C:\Users\Idan Stark\AppData\Local\Programs\Python\Python36-32\lib\site-packages\usb\backend\libusb1.py", line 595, in _check
raise USBError(_strerror(ret), ret, _libusb_errno[ret])
usb.core.USBError: [Errno 2] Entity not found
Any help will be appreciated, with libusb-1.0 or libusb-0.1, anything to make this work! Thank you!
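For what it's worth, a common cause of both errors is that the interface is still owned by another driver or is never explicitly claimed before the transfer. A minimal sketch of the usual claim sequence in PyUSB (the kernel-driver calls are typically unsupported on Windows, so this mainly applies to the libusb-1.0 backend):

import usb.core
import usb.util

dev = usb.core.find(idVendor=0x13ba, idProduct=0x0018)
if dev is None:
    raise ValueError("Device not found")

# If a kernel/HID driver already owns interface 0, claiming it fails;
# detach it first where the platform supports this.
try:
    if dev.is_kernel_driver_active(0):
        dev.detach_kernel_driver(0)
except NotImplementedError:
    pass  # e.g. on Windows

dev.set_configuration()
usb.util.claim_interface(dev, 0)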
In the .py file:
import matplotlib.pyplot as plt
...
Pic = fields.Binary('Picture')
...
x = [1, 2, 3, 4]
y = [4, 7, 9, 8]
plt.plot(x, y)
Now I want "Pic" to show the figure made by plt.plot(x, y) in the .py file. How can I do that?
In addition, how can I make "Pic" show a picture saved at "/home/user/pic.png" via Python code?
-----------------------------update according to Trần Khải Hoàng's advice-------------------------------------------
The .py code:
@api.multi
def plotfig(self, cr):
    x = [1, 2, 3, 4]
    y = [4, 7, 9, 8]
    plt.plot(x, y)
    tem = '/tmp/%s.png' % cr['uid']
    plt.savefig(tem)
    pic_data = open(tem, 'rb').read()
    self.write({'Pic': base64.encodestring(pic_data)})
    os.remove(tem)
Now when the user creates a record and clicks the "plotfig" button, a figure is shown in "Pic"; all seems OK so far (in addition, how can I set the size of the figure in code?).
But if the user creates another record and clicks "plotfig" again, he/she gets the warning "RuntimeError: main thread is not in main loop"; sometimes the error is "Fatal Python error: GC object already tracked", "Aborted" or "Segmentation fault", and the Odoo server shuts down automatically.
If I press Ctrl+C to stop the Odoo server, I also get the warning "RuntimeError: main thread is not in main loop".
I don't know how to resolve these problems.
You have to:
1. Save the plot to an image file.
2. Read the file and save it in the Odoo binary field.
import base64
import matplotlib.pyplot as plt

x = [1, 2, 3, 4]
y = [4, 7, 9, 8]
plt.plot(x, y)
plt.savefig('/home/user/pic.png')

pic_data = open('/home/user/pic.png', 'rb').read()
self.write({'Pic': base64.encodestring(pic_data)})
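Regarding the "RuntimeError: main thread is not in main loop" crashes from the edit above: that error usually means matplotlib's default GUI (Tk) backend is being driven from Odoo's worker threads. A hedged sketch of a thread-safe variant, forcing the non-interactive Agg backend and using the object-oriented API (the figsize argument also answers the figure-size question):

import base64
import io

import matplotlib
matplotlib.use('Agg')  # must run before pyplot is imported
import matplotlib.pyplot as plt

fig, ax = plt.subplots(figsize=(6, 4))  # size in inches
ax.plot([1, 2, 3, 4], [4, 7, 9, 8])

buf = io.BytesIO()  # in-memory buffer, so no temp file is needed
fig.savefig(buf, format='png')
plt.close(fig)  # free the figure so state does not leak between clicks

pic_data = base64.b64encode(buf.getvalue())  # then self.write({'Pic': pic_data}) as above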
I am getting the error below after executing the code below. Am I missing something in the installation? I am using Spark installed on my local Mac, so I am checking whether I need to install additional libraries for the code to work and load data from BigQuery.
Py4JJavaError Traceback (most recent call last)
<ipython-input-8-9d6701949cac> in <module>()
13 "com.google.cloud.hadoop.io.bigquery.JsonTextBigQueryInputFormat",
14 "org.apache.hadoop.io.LongWritable", "com.google.gson.JsonObject",
---> 15 conf=conf).map(lambda k: json.loads(k[1])).map(lambda x: (x["word"],
Py4JJavaError: An error occurred while calling z:org.apache.spark.api.python.PythonRDD.newAPIHadoopRDD.
: java.lang.ClassNotFoundException: com.google.gson.JsonObject
import json
import pyspark

sc = pyspark.SparkContext()
hadoopConf = sc._jsc.hadoopConfiguration()
hadoopConf.get("fs.gs.system.bucket")

conf = {
    "mapred.bq.project.id": "<project_id>",
    "mapred.bq.gcs.bucket": "<bucket>",
    "mapred.bq.input.project.id": "publicdata",
    "mapred.bq.input.dataset.id": "samples",
    "mapred.bq.input.table.id": "shakespeare",
}

tableData = (
    sc.newAPIHadoopRDD(
        "com.google.cloud.hadoop.io.bigquery.JsonTextBigQueryInputFormat",
        "org.apache.hadoop.io.LongWritable",
        "com.google.gson.JsonObject",
        conf=conf,
    )
    .map(lambda k: json.loads(k[1]))
    .map(lambda x: (x["word"], int(x["word_count"])))
    .reduceByKey(lambda x, y: x + y)
)

print(tableData.take(10))
The error "java.lang.ClassNotFoundException: com.google.gson.JsonObject" seems to hint that a library is missing.
Please try adding the gson jar to your path: http://search.maven.org/#artifactdetails|com.google.code.gson|gson|2.6.1|jar
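A sketch of one way to put that jar on the classpath when creating the context (the jar paths below are placeholders for wherever you downloaded them):

import pyspark

# Hypothetical local paths; adjust to your actual jar locations.
spark_conf = (
    pyspark.SparkConf()
    .set("spark.jars", "/path/to/gson-2.6.1.jar,/path/to/bigquery-connector.jar")
)
sc = pyspark.SparkContext(conf=spark_conf)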
Highlighting something buried in the connector link in Felipe's response: the BigQuery connector used to be included by default in Cloud Dataproc but was dropped starting with v1.3. The link shows you three ways to get it back.