OptaPy constraint testing

I started out using OptaPy as I am slightly more familiar with Python than Java. I would like to write some tests for my constraints to ensure they work correctly, but I can't seem to find any documentation or examples of a test class in Python, or of how to execute/run it.
I'm not sure if this is supported in OptaPy or only in OptaPlanner.
Any help or guidance would be appreciated.
Thanks

As of 8.30.0b0, ConstraintVerifier testing is supported in OptaPy. First, create a ConstraintVerifier from your @constraint_provider function:
from optapy.test import ConstraintVerifier, constraint_verifier_build
from domain import Timeslot, Room, Lesson, TimeTable
from constraints import define_constraints
constraint_verifier: ConstraintVerifier = constraint_verifier_build(define_constraints, TimeTable, Lesson)
(or alternatively, from your SolverConfig)
from optapy.test import ConstraintVerifier, constraint_verifier_create
constraint_verifier = constraint_verifier_create(solver_config)
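For reference, here is a minimal sketch of building such a solver_config programmatically, assuming the style used in the optapy quickstarts (the 30-second termination limit is an arbitrary illustration):
import optapy.config
from optapy.types import Duration

# Hypothetical programmatic config mirroring the optapy quickstarts
solver_config = optapy.config.solver.SolverConfig() \
    .withEntityClasses(Lesson) \
    .withSolutionClass(TimeTable) \
    .withConstraintProviderClass(define_constraints) \
    .withTerminationSpentLimit(Duration.ofSeconds(30))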
Then you can create tests for particular constraints:
from datetime import time
from constraints import room_conflict

ROOM1 = Room(1, "Room1")
ROOM2 = Room(2, "Room2")
TIMESLOT1 = Timeslot(1, 'MONDAY', time(12, 0), time(13, 0))
TIMESLOT2 = Timeslot(2, 'TUESDAY', time(12, 0), time(13, 0))
TIMESLOT3 = Timeslot(3, 'TUESDAY', time(13, 0), time(14, 0))
TIMESLOT4 = Timeslot(4, 'TUESDAY', time(15, 0), time(16, 0))

def test_room_conflict():
    first_lesson = Lesson(1, "Subject1", "Teacher1", "Group1", TIMESLOT1, ROOM1)
    conflicting_lesson = Lesson(2, "Subject2", "Teacher2", "Group2", TIMESLOT1, ROOM1)
    non_conflicting_lesson = Lesson(3, "Subject3", "Teacher3", "Group3", TIMESLOT2, ROOM1)
    constraint_verifier.verify_that(room_conflict) \
        .given(first_lesson, conflicting_lesson, non_conflicting_lesson) \
        .penalizes_by(1)
This tests the room_conflict constraint in isolation from all the other constraints. You can also test all constraints together by calling verify_that() with no arguments and replacing penalizes_by with scores.
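For instance, a minimal sketch of such a whole-assignment test, reusing the fixtures above and assuming the score type is HardSoftScore (the expected score here is purely illustrative):
from optapy.score import HardSoftScore

def test_all_constraints():
    # With no argument, verify_that() evaluates every constraint
    # returned by define_constraints against the given facts.
    constraint_verifier.verify_that() \
        .given(Lesson(1, "Subject1", "Teacher1", "Group1", TIMESLOT1, ROOM1),
               Lesson(2, "Subject2", "Teacher2", "Group2", TIMESLOT1, ROOM1)) \
        .scores(HardSoftScore.of(-1, 0))  # hypothetical expected score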
For a complete example, see the tests in the optapy school timetabling quickstart.

Related

SQLAlchemy IntegrityError: relationship using an object already in the database

This is my model, made with classical mapping; classes A and B work as expected.
import sqlalchemy as sa
from sqlalchemy.orm import mapper, relationship
from domain.a import A
from domain.b import B
from app_extentions import metadata

a_table = sa.Table(
    'a', metadata,
    sa.Column('description', sa.String(30), primary_key=True),  # I think this is important
    sa.Column('value_x', sa.Boolean()),
)

b_table = sa.Table(
    'b', metadata,
    sa.Column('id', sa.BigInteger, primary_key=True, autoincrement=True),
    sa.Column('description', sa.String(50), sa.ForeignKey(a_table.c.description), nullable=False),
    sa.Column('value_y', sa.String(20), nullable=True),
)

mapper(A, a_table)
mapper(B, b_table, properties={
    'rel': relationship(
        A, primaryjoin=(a_table.c.description == b_table.c.description)
    ),
})
When I do this using pytest:
obj1: A = retrieve_A_object()  # A is already in the DB, I get it
obj2: B = create_B_object()    # this is created now, it is brand new
obj2.rel = obj1

session = get_session()
session.add(obj2)
session.commit()
SQLAlchemy raises an error
def do_execute(self, cursor, statement, parameters, context=None):
> cursor.execute(statement, parameters)
E sqlalchemy.exc.IntegrityError: (psycopg2.errors.UniqueViolation) duplicate key value violates unique constraint "a_pkey"
E DETAIL: Key (description)=('MY DESCRIPTION') already exists.
I know that the A object is already in the DB; I just want to save the B object.
How can I solve this? Why is this happening?

APScheduler + Scrapy: signal only works in main thread

I want to combine APScheduler with Scrapy, but my code is wrong. How should I modify it?
settings = get_project_settings()
configure_logging(settings)
runner = CrawlerRunner(settings)

@defer.inlineCallbacks
def crawl():
    reactor.run()
    yield runner.crawl(Jobaispider)  # this is my spider
    yield runner.crawl(Jobpythonspider)  # this is my spider
    reactor.stop()

sched = BlockingScheduler()
sched.add_job(crawl, 'date', run_date=datetime(2018, 12, 4, 10, 45, 10))
sched.start()
Error: builtins.ValueError: signal only works in main thread
This question has been answered in good detail here: How to integrate Flask & Scrapy?, which covers a variety of use cases and ideas. I also found one of the links in that thread very useful: https://github.com/notoriousno/scrapy-flask
To answer your question more directly, try the code below. It uses the solution from the above two links, in particular the crochet library: crochet.setup() runs the Twisted reactor in its own background thread, so the scheduler's worker thread never has to call reactor.run() (which is what tries to install signal handlers and raises the error).
import crochet
crochet.setup()

# Assumed imports, matching the question's setup (not shown in the original answer)
from datetime import datetime
from apscheduler.schedulers.blocking import BlockingScheduler
from scrapy.crawler import CrawlerRunner
from scrapy.utils.log import configure_logging
from scrapy.utils.project import get_project_settings

settings = get_project_settings()
configure_logging(settings)
runner = CrawlerRunner(settings)

# Note: the @defer.inlineCallbacks decorator is removed for this example
@crochet.run_in_reactor
def crawl():
    runner.crawl(Jobaispider)  # this is my spider
    runner.crawl(Jobpythonspider)  # this is my spider

sched = BlockingScheduler()
sched.add_job(crawl, 'date', run_date=datetime(2018, 12, 4, 10, 45, 10))
sched.start()

BigQuery: credentials requested in an automated job

I have a Python application/job which pushes a dataframe to BigQuery. However, the job is failing because it evidently asks for credentials, as shown below:
Please visit this URL to authorize this application:
As this is an automated job, I can't click the link and submit the code. Is there any other way to pass the authorization?
I have already set up the service account key in my environment variables / bashrc.
Code:
from datetime import timedelta
import pandas as pd
from io import StringIO
from azure.storage.blob import BlockBlobService

class Transmitter:
    def __init__(self):
        self.blob_service = BlockBlobService(account_name='xxxx',
                                             account_key='xxxxxxxxxxxxx')
        self.dataset_id = 'xxxx'
        self.jobQuery = "select JobID, EmailName from xxxxx group by JobID, EmailName"
        self.keyDf = pd.read_csv('jobKeys.csv')

    def toBigQJobs(self):
        jDf = pd.read_gbq(self.jobQuery, project_id='xxxx', dialect='standard')
        jDf['Type'] = 'C'
        jDf['Category'] = 'other'
        for index, row in jDf.iterrows():
            for indexA, rowA in self.keyDf.iterrows():
                if rowA['Key'] in row['EmailName']:
                    jDf.loc[index, 'Category'] = rowA['Category']
                    jDf.loc[index, 'Type'] = rowA['Type']
                    break
        jDf.to_gbq(destination_table='xxxx', project_id='xxxx',
                   if_exists='replace')

if __name__ == '__main__':
    objTransmitter = Transmitter()
    objTransmitter.toBigQJobs()
Solution: setting the environment variable through os.environ inside the job made it work.
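A minimal sketch of that fix, assuming the standard GOOGLE_APPLICATION_CREDENTIALS variable and a hypothetical key path; setting it inside the process, before any read_gbq/to_gbq call, avoids depending on whatever shell environment the job runs under:
import os

# Hypothetical path to the service-account key file
os.environ['GOOGLE_APPLICATION_CREDENTIALS'] = '/path/to/service-account-key.json'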

Unit testing both Excel formulas and VB with the same tool?

Are there any tools that allow for unit testing both Excel formulas and Visual Basic forms within Excel? I'm finding methods that will do one or the other, but not both. Rubberduck for example looks promising for testing VBA, but does not appear to allow testing of formulas within Excel spreadsheets.
The only way I have found to "unit test" functions within Excel is to create a sheet in your workbook whose sole purpose is to be the validation page. Define various additional functions within this sheet that look for edge cases, check additions, and so on, throughout your workbook.
It is quite helpful to keep a comments field as well as a boolean success field that can be aggregated into a custom-format message on your other input pages to cue the user to a failed "unit test".
This approach can work quite well in making your testing reusable as well as transparent to the end users of your workbooks.
It is possible to unit test both Excel formulas and VBA using FlyingKoala. FlyingKoala is an extension of xlwings.
xlwings offers a COM wrapper which provides the ability to execute VBA from Python (by getting Excel to run it). It's a great library/solution. The esteemed Felix from ZoomerAnalytics has written a blog post about unit testing VBA using xlwings, with examples.
FlyingKoala uses a library (xlcalculator) to convert Excel formulas into Python, which can then be unit tested in Python's unittest framework. So it's possible to evaluate a formula and check it against a known goal value, whether that value comes from Excel or is pre-defined.
An example of unit testing formulas using FlyingKoala while Excel is running:
import unittest
import logging

import xlwings as xw
from flyingkoala import FlyingKoala
from pandas import DataFrame
from pandas import Series
from numpy import array
from numpy.testing import assert_array_equal
from pandas.testing import assert_series_equal

logging.basicConfig(level=logging.ERROR)

class Test_equation_1(unittest.TestCase):

    def setUp(self):
        self.workbook_name = r'growing_degrees_day.xlsm'
        if len(xw.apps) == 0:
            raise RuntimeError("We need an Excel workbook open for this unit test.")
        self.my_fk = FlyingKoala(self.workbook_name, load_koala=True)
        self.my_fk.reload_koala('')
        self.equation_name = xw.Range('Equation_1')
        if self.equation_name not in self.my_fk.koala_models.keys():
            model = None
            wb = xw.books[self.workbook_name]
            wb.activate()
            for name in wb.names:
                self.my_fk.load_model(self.equation_name)
                if self.equation_name == name.name:
                    model = xw.Range(self.equation_name)
                    self.my_fk.generate_model_graph(model)
            if model is None:
                return 'Model "%s" has not been loaded into cache; if the named range exists, check the spelling.' % self.equation_name

    def test_Equation_1(self):
        """First type of test for Equation_1"""
        xw.books[self.workbook_name].sheets['Growing Degree Day'].activate()
        goal = xw.books[self.workbook_name].sheets['Growing Degree Day'].range(xw.Range('D2'), xw.Range('D6')).options(array).value
        tmin = xw.books[self.workbook_name].sheets['Growing Degree Day'].range(xw.Range('B2'), xw.Range('B6')).options(array).value
        tmax = xw.books[self.workbook_name].sheets['Growing Degree Day'].range(xw.Range('C2'), xw.Range('C6')).options(array).value
        inputs_for_DegreeDay = DataFrame({'T_min': tmin, 'T_max': tmax})
        result = self.my_fk.evaluate_koala_model('Equation_1', inputs_for_DegreeDay).to_numpy()
        assert_array_equal(goal, result)

    def test_Equation_1_predefined_goal(self):
        """Second type of test for Equation_1"""
        goal = Series([0.0, 0.0, 0.0, 0.0, 0.0, 5, 10, 15, 20])
        tmin = [-20, -15, -10, -5, 0, 5, 10, 15, 20]
        tmax = [0, 5, 10, 15, 20, 25, 30, 35, 40]
        inputs_for_DegreeDay = DataFrame({'T_min': tmin, 'T_max': tmax})
        result = self.my_fk.evaluate_koala_model('Equation_1', inputs_for_DegreeDay)
        assert_series_equal(goal, result)

    def test_VBA_Equation_1(self):
        """
        The function definition being called:

        Function VBA_Equation_1(T_min As Double, T_max As Double) As Double
            VBA_Equation_1 = Application.WorksheetFunction.Max(((T_max + T_min) / 2) - 10, 0)
        End Function
        """
        goal = 20
        vba_equation_1 = xw.books[self.workbook_name].macro('VBA_Equation_1')
        result = vba_equation_1(20.0, 40.0)
        self.assertEqual(goal, result)
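Assuming those tests live in their own module, a stock unittest idiom (not part of the original answer) makes the module directly runnable while the workbook is open in Excel:
if __name__ == '__main__':
    unittest.main()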

Empty outputs with Python GDAL

Hello, I'm new to GDAL and I'm struggling with my code. Everything seems to go well, but the output band at the end is empty. The NoData value is set to 256 when I specify 255, so I don't really know what's wrong. Thanks, any help will be appreciated!
Here is my code:
from osgeo import gdal
from osgeo import gdalconst
from osgeo import osr
from osgeo import ogr
import numpy

# graticule
src_ds = gdal.Open("E:\\NFI_photo_plot\\photoplotdownloadAllCanada\\provincial_merge\\Aggregate\\graticule1.tif")
band = src_ds.GetRasterBand(1)
band.SetNoDataValue(0)
graticule = band.ReadAsArray()
print('graticule done')
band = "none"

# Biomass
dataset1 = gdal.Open("E:\\NFI_photo_plot\\photoplotdownloadAllCanada\\provincial_merge\\Aggregate\\Biomass_NFI.tif")
band1 = dataset1.GetRasterBand(1)
band1.SetNoDataValue(-1)
Biomass = band1.ReadAsArray()
maskbiomass = numpy.greater(Biomass, -1).astype(int)
print("biomass done")
Biomass = "none"
band1 = "none"
dataset1 = "none"

# Baseline
dataset2 = gdal.Open("E:\\NFI_photo_plot\\Baseline\\TOTBM_250.tif")
band2 = dataset2.GetRasterBand(1)
band2.SetNoDataValue(0)
baseline = band2.ReadAsArray()
maskbaseline = numpy.greater(baseline, 0).astype(int)
print('baseline done')
baseline = "none"
band2 = "none"
dataset2 = "none"

# summation
biosource = (graticule + maskbiomass + maskbaseline)
biosource1 = numpy.uint8(biosource)
biosource = "none"

# write the output
dst_file = "E:\\NFI_photo_plot\\photoplotdownloadAllCanada\\provincial_merge\\Aggregate\\Biosource.tif"
dst_driver = gdal.GetDriverByName('GTiff')
dst_ds = dst_driver.Create(dst_file, src_ds.RasterXSize,
                           src_ds.RasterYSize, 1, gdal.GDT_Byte)

# projection
dst_ds.SetProjection(src_ds.GetProjection())
dst_ds.SetGeoTransform(src_ds.GetGeoTransform())
outband = dst_ds.GetRasterBand(1)
outband.WriteArray(biosource1, 0, 0)
outband.SetNoDataValue(255)
biosource = "none"
graticule = "none"
A few pointers:
Where you have ="none", these need to be = None to close/clean up the objects; otherwise you are just rebinding the names to the string "none", which is not what you intend to do.
Why do you have band1.SetNoDataValue(-1) while the other NoData values are 0? Is this data source signed or unsigned? If it is unsigned, then -1 doesn't exist.
When you open rasters with gdal.Open without an access option, it defaults to gdal.GA_ReadOnly, which means your subsequent SetNoDataValue calls do nothing. If you want to modify the dataset, pass gdal.GA_Update as the second parameter to gdal.Open, as in the sketch below.
Another strategy for creating a new raster is to use driver.CreateCopy; see the tutorial for details.
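A minimal sketch of the cleanup and access-mode points above, using a hypothetical path:
from osgeo import gdal

# Open with GA_Update so SetNoDataValue can actually modify the file
ds = gdal.Open('E:\\some_raster.tif', gdal.GA_Update)
band = ds.GetRasterBand(1)
band.SetNoDataValue(0)

# Rebind to None (not the string "none") so GDAL flushes and closes the objects
band = None
ds = None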