I'm trying to use Jython to run Apache POI, but I have the following problem:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "FRTFormat.py", line 14, in <module>
from org.apache.poi.hssf.usermodel import *
ImportError: No module named apache
I launch Jython using the following command:
java.exe -jar C:\dev\lang\jython\2.7.0_old\jython-standalone-2.7.0.jar
And my "Hello World" program is:
import os
import csv
import java.text.SimpleDateFormat as Sdf
from java.io import FileInputStream
from java.io import FileOutputStream
from datetime import datetime
from sys import path
path.append("C:\\dev\\poi-3.14-20160307.jar")
path.append("C:\\dev\\poi-ooxml-3.14-20160307.jar")
from org.apache.poi.hssf.usermodel import *
def ejectFRT(eje):
    print("Hello")
Can someone help me?
Thanks in advance.
Regards
SOLVED:
The problem was in the command used to launch Jython:
java.exe -jar C:\dev\lang\jython\2.7.0_old\jython-standalone-2.7.0.jar
It's necessary to enable Jython's package cache: the standalone jar skips the cache by default, and without it Jython cannot scan the POI jars added to sys.path, so the org.apache packages are never found. Add the cachedir options:
java.exe -Dpython.cachedir.skip=false -Dpython.cachedir=./tmp -jar jython-standalone-2.7.0.jar
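For reference, here is a minimal sketch of the kind of script that works once the cache options are passed; the jar path and output file name are just placeholders taken from the question:

from sys import path
path.append("C:\\dev\\poi-3.14-20160307.jar")

from java.io import FileOutputStream
from org.apache.poi.hssf.usermodel import HSSFWorkbook

wb = HSSFWorkbook()                # empty .xls workbook
sheet = wb.createSheet("Test")     # add one sheet
row = sheet.createRow(0)           # first row
row.createCell(0).setCellValue("Hello from Jython")

out = FileOutputStream("test.xls")
wb.write(out)                      # write the workbook to disk
out.close()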
Related
import mysql.connector
I was trying to create a database using Python. I ran the code shown below after installing mysql-connector-python into the Python interpreter, but the following output shows up in the terminal:
Traceback (most recent call last):
File "F:\coding\sql-course-materials\SQL Course Materials\mysql.py", line 1, in <module>
import mysql.connector
File "F:\coding\sql-course-materials\SQL Course Materials\mysql.py", line 1, in <module>
import mysql.connector
ModuleNotFoundError: No module named 'mysql.connector'; 'mysql' is not a package
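The last line of the traceback points at the cause: the script itself is named mysql.py, so import mysql finds the script rather than the installed package. A minimal sketch, assuming the file is renamed to something else (say create_db.py) and with placeholder connection details:

import mysql.connector

conn = mysql.connector.connect(
    host="localhost",        # placeholder connection details
    user="root",
    password="your_password",
)
cursor = conn.cursor()
cursor.execute("CREATE DATABASE IF NOT EXISTS test_db")
conn.close()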
I am working on a scraping project and have recent versions of Python (3.9.5), VS Code, Selenium, and BeautifulSoup installed. All the modules seem to be working correctly and I receive no errors when running the code. However, the URL is not opened by the web driver when I run the code; nothing happens. Please assist me with what I am missing so I am able to see my control window/display.
Code:
import csv
from bs4 import BeautifulSoup
from selenium import webdriver
# Startup the webdriver
driver = webdriver.Chrome()
url = 'https://www.amazon.com'
driver.get(url)
Output:
[Running] python -u "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/ScrapingTest.py"
/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/bs4/element.py:16: UserWarning: The soupsieve package is not installed. CSS selectors cannot be used.
'The soupsieve package is not installed. CSS selectors cannot be used.'
Traceback (most recent call last):
File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/ScrapingTest.py", line 2, in <module>
from bs4 import BeautifulSoup
File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/bs4/__init__.py", line 32, in <module>
from .builder import builder_registry, ParserRejectedMarkup
File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/bs4/builder/__init__.py", line 7, in <module>
from bs4.element import (
File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/bs4/element.py", line 19, in <module>
from bs4.formatter import (
File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/bs4/formatter.py", line 1, in <module>
from bs4.dammit import EntitySubstitution
File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/bs4/dammit.py", line 68, in <module>
class EntitySubstitution(object):
File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/bs4/dammit.py", line 97, in EntitySubstitution
CHARACTER_TO_HTML_ENTITY_RE) = _populate_class_variables()
File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/bs4/dammit.py", line 83, in _populate_class_variables
character = chr(codepoint)
ValueError: chr() arg not in range(256)
[Done] exited with code=1 in 0.198 seconds
The error message states that:
UserWarning: The soupsieve package is not installed. CSS selectors cannot be used.
Hence, installing the soupsieve package using pip install soupsieve should fix the error.
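A quick way to confirm the fix (a minimal check, assuming bs4 imports cleanly once soupsieve is installed):

# run after `pip install soupsieve`
import soupsieve                      # should import without error
from bs4 import BeautifulSoup

soup = BeautifulSoup("<p class='x'>hi</p>", "html.parser")
print(soup.select("p.x"))             # CSS selectors are available again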
I am quite new to Python. I have Python 2.7 installed on my PC (Windows). I am trying to run a script from the Python command line. My script is named "Script" and contains:
import sys # Load a library module
print(sys.platform)
print(2 ** 100) # Raise 2 to a power
x = 'Spam!'
print(x * 8) # String repetition
and when I import the script by typing import Script.py, it gives this:
win32
1267650600228229401496703205376
Spam!Spam!Spam!Spam!Spam!Spam!Spam!Spam!
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ImportError: No module named py
Why does the error message appear here? Thanks in advance. :)
Instead of writing import myscript.py, simply use:
import myscript
Note that myscript.py should be in the same location.
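A minimal sketch of the same idea on Python 2.7, as in the question: the module name is the filename without the .py extension, and Script.py must sit in a directory on sys.path (for example the current one).

import Script        # runs Script.py once and prints its output

# A second "import Script" does nothing; to re-run the module body,
# use the Python 2 builtin reload():
reload(Script)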
I was trying out the QuotesSpider example given in the docs (page 5, here), but am having a hard time getting it running. I installed Scrapy from conda in the root environment. I am on Ubuntu 14.04, a 64-bit machine. As soon as I run the given code snippet with the following command:
$ scrapy runspider quotes_spider.py -o quotes.json
I get the following error:
Traceback (most recent call last):
File "/home/rip/miniconda2/bin/scrapy", line 4, in <module>
import scrapy.cmdline
File "/home/rip/miniconda2/lib/python2.7/site-packages/scrapy/__init__.py", line 34, in <module>
from scrapy.spiders import Spider
File "/home/rip/miniconda2/lib/python2.7/site-packages/scrapy/spiders/__init__.py", line 10, in <module>
from scrapy.http import Request
File "/home/rip/miniconda2/lib/python2.7/site-packages/scrapy/http/__init__.py", line 11, in <module>
from scrapy.http.request.form import FormRequest
File "/home/rip/miniconda2/lib/python2.7/site-packages/scrapy/http/request/form.py", line 9, in <module>
import lxml.html
File "/home/rip/miniconda2/lib/python2.7/site-packages/lxml/html/__init__.py", line 54, in <module>
from .. import etree
ImportError: libiconv.so.2: cannot open shared object file: No such file or directory
As is apparent, a shared object file seems to be missing. Do I have to build Scrapy from source, or is there an alternative?
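One hedged way to confirm that reading: the traceback fails while lxml's compiled extension is being loaded, before any Scrapy code runs, so importing lxml directly should reproduce the error on its own:

import lxml.etree                 # raises the same libiconv.so.2 ImportError
                                  # while the shared library is missing
print(lxml.etree.LXML_VERSION)    # prints a version tuple once lxml loads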
I am using the PyDev plugin in Eclipse Juno for my Python programming on Windows 7, with Python 3.2. It works fine when running Python applications that use only standard Python packages. For one of my projects I have to use the pandas library, so I downloaded and installed the NumPy and pandas Windows installers for Python 3. But while running even a small program it shows an error message. So if anyone has any idea how to install and test pandas on Windows 7 using Eclipse, please pass it on to me.
The error message is like this:
Traceback (most recent call last):
import numpy
File "C:\Python32\lib\site-packages\numpy\__init__.py", line 137, in <module>
from . import add_newdocs
File "C:\Python32\lib\site-packages\numpy\add_newdocs.py", line 9, in <module>
from numpy.lib import add_newdoc
File "C:\Python32\lib\site-packages\numpy\lib\__init__.py", line 4, in <module>
from .type_check import *
File "C:\Python32\lib\site-packages\numpy\lib\type_check.py", line 8, in <module>
import numpy.core.numeric as _nx
File "C:\Python32\lib\site-packages\numpy\core\__init__.py", line 40, in <module>
from numpy.testing import Tester
File "C:\Python32\lib\site-packages\numpy\testing\__init__.py", line 8, in <module>
from unittest import TestCase
File "C:\Python32\lib\unittest\__init__.py", line 59, in <module>
from .case import (TestCase, FunctionTestCase, SkipTest, skip, skipIf,
File "C:\Python32\lib\unittest\case.py", line 6, in <module>
import pprint
EOFError: EOF read where not expected
Thanks in advance for your time
I think you have to install pandas; you can find the pandas package for Windows here:
https://pypi.python.org/pypi/pandas#downloads
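Once the installers have run, a minimal smoke test outside Eclipse (a sketch; the printed versions are whatever the Windows installers ship):

import numpy
import pandas

print(numpy.__version__)
print(pandas.__version__)

# tiny DataFrame to confirm pandas actually works
df = pandas.DataFrame({"a": [1, 2, 3]})
print(df)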