unable to open database file on a hosting service pythonanywhere - flask-sqlalchemy

I want to deploy my project on PythonAnywhere. The error.log says the server is unable to open my database, although everything works fine on my local machine. I followed a video by Pretty Printed on YouTube.
This is how I initialize the database in app.py:
db_session.global_init("db/data.sqlite")
and this is in db_session:
def global_init(db_file):
    global __factory

    if __factory:
        return

    if not db_file or not db_file.strip():
        raise Exception("A database file must be specified.")

    conn_str = f'sqlite:///{db_file.strip()}?check_same_thread=False'
    print(f"Connecting to the database at {conn_str}")

    engine = sa.create_engine(conn_str, echo=False)
    __factory = orm.sessionmaker(bind=engine)

    from . import __all_models

    SqlAlchemyBase.metadata.create_all(engine)


def create_session() -> Session:
    global __factory
    return __factory()
And the last thing is my wsgi.py:
import sys

path = '/home/r1chter/Chicken-beta'
if path not in sys.path:
    sys.path.append(path)

import os
from dotenv import load_dotenv

project_folder = os.path.expanduser(path)
load_dotenv(os.path.join(project_folder, '.env'))

import app  # noqa

application = app.app()

Errors like this on PythonAnywhere are usually caused by passing a relative path to the database instead of an absolute one: the working directory of the WSGI server is not your project directory, so `db/data.sqlite` resolves to a location that doesn't exist.
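A minimal sketch of the fix: derive an absolute path from the location of the current file and pass that to the init function from the question (the `db/data.sqlite` layout is taken from the question; the variable names here are illustrative).

```python
import os

# Build an absolute path anchored at this file's directory, so it works
# regardless of which directory the WSGI server starts the process in.
BASE_DIR = os.path.dirname(os.path.abspath(__file__))
db_path = os.path.join(BASE_DIR, "db", "data.sqlite")

# then, instead of db_session.global_init("db/data.sqlite"):
# db_session.global_init(db_path)
print(db_path)
```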

Related

Can't upload and check an Excel file on a Unix system

I'm trying to upload a file and check whether it is an Excel file. My code is:
The router:
from fastapi import APIRouter, UploadFile, File, HTTPException, status, Depends
from fastapi.responses import HTMLResponse

from app.api.deps import get_curret_admin
from app.schemas.admin import Admins
from app.core.verify_file import check_excel

router = APIRouter()


# Upload Excel file
@router.post(
    path="/api/uploadfile/",
    status_code=status.HTTP_200_OK,
    summary="Upload an Excel file",
    tags=["Files"]
)
async def upload_file(
    file: UploadFile,
    admin: Admins = Depends(get_curret_admin)
):
    if not check_excel(file.file):
        raise HTTPException(status_code=400, detail="The file is not an Excel file")
    return HTMLResponse("The file was uploaded successfully")
And the check:
from openpyxl import load_workbook


def check_excel(file):
    try:
        wb = load_workbook(file)
        return True
    except Exception:
        return False
The problem is that this works fine on Windows, but not on Linux or macOS. On Windows the check passes when an Excel file is uploaded, but on Unix systems every file is rejected as not being Excel.
OK, I found the answer on my own. The problem was the Python version: on my Mac and my Linux PC it is below 3.11, and there is an issue with the temporary file object that FastAPI uploads use on Python versions before 3.11. The workaround is to use io.BytesIO, as on this page:
https://asiones.hashnode.dev/fastapi-receive-a-xlsx-file-and-read-it
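A minimal sketch of that workaround: copy the upload into an `io.BytesIO` buffer before reading it, since `BytesIO` is always seekable, unlike the `SpooledTemporaryFile` FastAPI hands you on Python versions before 3.11. (To keep the sketch dependency-free, the ZIP check below stands in for openpyxl's `load_workbook` — an .xlsx file is a ZIP archive, so a non-ZIP stream would fail the same way.)

```python
import io
import tempfile
import zipfile


def check_excel(upload) -> bool:
    # Copy the (possibly non-seekable) upload into an in-memory buffer first
    buffer = io.BytesIO(upload.read())
    # .xlsx files are ZIP archives; this check needs a seekable file object
    return zipfile.is_zipfile(buffer)


# Usage sketch with the same kind of spooled file FastAPI provides
spooled = tempfile.SpooledTemporaryFile()
spooled.write(b"not a zip archive")
spooled.seek(0)
print(check_excel(spooled))  # False: plain bytes are not a ZIP/xlsx
```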

Trying to deploy an ML model as an API through BentoML

This is my code, in service.py:
import bentoml
import numpy as np
from bentoml.io import NumpyNdarray

# Get the runner
xgb_runner = bentoml.models.get("xgb_booster:latest").to_runner()

# Create a Service object
svc = bentoml.Service("xgb_classifier", runners=[xgb_runner])


# Create an endpoint named classify
@svc.api(input=NumpyNdarray(), output=NumpyNdarray())
def classify(input_series) -> np.ndarray:
    # Run the input array through the runner's prediction
    label = xgb_runner.predict.run(input_series)
    return label
I run it with
bentoml serve service.py:svc --reload
and the error I am getting is:
Error: [bentoml-cli] serve failed: Failed to load bento or import service 'Service.py:svc'.
If you are attempting to import bento in local store: 'Failed to import module "Service.py": No module named 'Service.py''.
If you are importing by python module path: 'Bento 'Service.py:svc' is not found in BentoML store <osfs '/root/bentoml/bentos'>'.
So I tried pip install service.

How to write a python-for-android recipe for a package in a local directory and not a zip file url?

I have

buildozer.spec
recipes/
    myrecipe/
        __init__.py
        mypackage/
            setup.py
            code.py

But when I try to write a recipe with a file:// URL, as suggested when googling this issue, I get the error Exception: Given path is neither a file nor a directory: /home/user/project/.buildozer/android/platform/build-armeabi-v7a/packages/mypackage/mypackage (note the mypackage twice).
How can I achieve this?
There is an IncludedFilesBehaviour mixin just for this; give it a relative path with src_filename:
import os
import sys

from pythonforandroid.recipe import IncludedFilesBehaviour, CppCompiledComponentsPythonRecipe


class MyRecipe(IncludedFilesBehaviour, CppCompiledComponentsPythonRecipe):
    version = 'stable'
    src_filename = "../../../phase-engine"
    name = 'phase-engine'
    depends = ['setuptools']
    call_hostpython_via_targetpython = False
    install_in_hostpython = True

    def get_recipe_env(self, arch):
        env = super().get_recipe_env(arch)
        env['LDFLAGS'] += ' -lc++_shared'
        return env


recipe = MyRecipe()
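For buildozer to find such a local recipe at all, buildozer.spec also needs to point at the recipes directory. A sketch, assuming the `recipes/` layout from the question (the exact relative path is an assumption):

```ini
[app]
# Tell python-for-android where to look for local recipes
p4a.local_recipes = ./recipes
```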

Why does Scrapy log differently to the console and to an external log file?

I'm new to Scrapy. I once ran my script fine on Scrapy 0.24, but after switching to the newly launched 1.0 I ran into a logging problem. What I want is to set both the file and the console log level to INFO, but however I set LOG_LEVEL or call configure_logging() (using Python's built-in logging package instead of scrapy.log), Scrapy always logs DEBUG-level information to the console, which prints the whole item object as a dict. The LOG_LEVEL option only takes effect for the external file. I suspect it has something to do with Python logging, but I have no idea how to set it up. Could anyone help me out?
This is how I config my logging in run_my_spider.py:
import logging

from crawler.settings import LOG_FILE, LOG_FORMAT
from crawler.spiders.MySpiders import MySpider
from scrapy.crawler import CrawlerProcess
from scrapy.utils.log import configure_logging
from scrapy.utils.project import get_project_settings


def run_spider(spider):
    settings = get_project_settings()

    # configure file logging
    # It ONLY works for the file
    configure_logging({'LOG_FORMAT': LOG_FORMAT,
                       'LOG_ENABLED': True,
                       'LOG_FILE': LOG_FILE,
                       'LOG_LEVEL': 'INFO',
                       'LOG_STDOUT': True})

    # instantiate the spider
    process = CrawlerProcess(settings)
    process.crawl(MySpider)
    logging.info('Running Crawler: ' + spider.name)
    process.start()  # the script will block here until the spider_closed signal is sent
    logging.info('Crawler ' + spider.name + ' stopped.\n')
......
This is the console output:
DEBUG:scrapy.core.engine:Crawled (200) <GET http://mil.news.sina.com.cn/2014-10-09/0450804543.html>(referer: http://rss.sina.com.cn/rollnews/jczs/20141009.js)
{'item_name': 'item_sina_news_reply',
'news_id': u'jc:27-1-804530',
'reply_id': u'jc:27-1-804530:1',
'reply_lastcrawl': '1438605374.41',
'reply_table': 'news_reply_20141009'}
Many Thanks!
It may be that what you are seeing in the console is the Twisted log, which prints DEBUG-level messages to the console. You can redirect those messages to your log files using:
from twisted.python import log
observer = log.PythonLoggingObserver(loggerName='logname')
observer.start()
(As given in How to make Twisted use Python logging?)
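The same idea in plain stdlib logging, independent of Scrapy and Twisted (a sketch; the file name and format are illustrative): set the threshold once on the root logger and attach both a console and a file handler to it, so DEBUG records are filtered out of both outputs.

```python
import logging

root = logging.getLogger()
root.setLevel(logging.INFO)  # DEBUG records are dropped here, before any handler

for handler in (logging.StreamHandler(), logging.FileHandler('crawler.log')):
    handler.setFormatter(logging.Formatter('%(levelname)s:%(name)s:%(message)s'))
    root.addHandler(handler)

logging.debug('hidden on both console and file')
logging.info('visible on both console and file')
```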

ImproperlyConfigured: Requested setting MIDDLEWARE_CLASSES, but settings are not configured

I am configuring Apache mod_wsgi for a Django project,
and here is my djangotest.wsgi file:
import os
import sys
sys.path = ['/home/pavan/djangoproject'] + sys.path
os.environ['DJANGO_SETTINS_MODULE'] = 'djangoproject.settings'
import django.core.handlers.wsgi
_application = django.core.handlers.wsgi.WSGIHandler()
def application(environ, start_response):
    environ['PATH_INFO'] = environ['SCRIPT_NAME'] + environ['PATH_INFO']
    return _application(environ, start_response)
I added the WSGIScriptAlias to my virtual host.
When I try to get the homepage of the project, it gives the following error:
ImproperlyConfigured: Requested setting MIDDLEWARE_CLASSES, but settings are not configured. You must either define the environment variable DJANGO_SETTINGS_MODULE or call settings.configure() before accessing settings.
Looks like you have a typo on line 4. 'DJANGO_SETTINS_MODULE' should be 'DJANGO_SETTINGS_MODULE'.
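For reference, the corrected assignment:

```python
import os

# Correct spelling: DJANGO_SETTINGS_MODULE (the question had DJANGO_SETTINS_MODULE)
os.environ['DJANGO_SETTINGS_MODULE'] = 'djangoproject.settings'
```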