How to set up a multisample render pass with a depth attachment? - vulkan

First of all, I get no errors from the validation layers and multisampling works.
But when I copy the resolved depth image, I can see that it is empty.
When I copy the MSAA depth image, I can see that it is not empty. (As expected, the validation layers did complain, because I copied a VK_SAMPLE_COUNT_4_BIT image to a VK_SAMPLE_COUNT_1_BIT one, but interestingly it still worked. This is how I found out that there is no problem with the MSAA depth image.)
Therefore the problem should be in the depth resolve. Where am I making a mistake?
VkAttachmentDescription colorAttachment_MSAA{};
colorAttachment_MSAA.samples = VK_SAMPLE_COUNT_4_BIT;
...
VkAttachmentDescription colorAttachment_Resolve{};
colorAttachment_Resolve.samples = VK_SAMPLE_COUNT_1_BIT;
...
VkAttachmentDescription depthAttachment_MSAA{};
depthAttachment_MSAA.samples = VK_SAMPLE_COUNT_4_BIT;
...
VkAttachmentDescription depthAttachment_Resolve{};
depthAttachment_Resolve.samples = VK_SAMPLE_COUNT_1_BIT;
depthAttachment_Resolve.loadOp = VK_ATTACHMENT_LOAD_OP_DONT_CARE;
depthAttachment_Resolve.storeOp = VK_ATTACHMENT_STORE_OP_STORE;
depthAttachment_Resolve.initialLayout = VK_IMAGE_LAYOUT_UNDEFINED;
depthAttachment_Resolve.finalLayout = VK_IMAGE_LAYOUT_DEPTH_STENCIL_ATTACHMENT_OPTIMAL;
...
VkAttachmentReference colorAttachment_MSAA_Ref{};
colorAttachment_MSAA_Ref.attachment = 0;
colorAttachment_MSAA_Ref.layout = VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL;
VkAttachmentReference depthAttachment_MSAA_Ref{};
depthAttachment_MSAA_Ref.attachment = 1;
depthAttachment_MSAA_Ref.layout = VK_IMAGE_LAYOUT_DEPTH_STENCIL_ATTACHMENT_OPTIMAL;
VkAttachmentReference colorAttachment_Resolve_Ref{};
colorAttachment_Resolve_Ref.attachment = 2;
colorAttachment_Resolve_Ref.layout = VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL;
// I created depthAttachment_Resolve_Ref but couldn't find a place to use it.
VkAttachmentReference depthAttachment_Resolve_Ref{};
depthAttachment_Resolve_Ref.attachment = 3;
depthAttachment_Resolve_Ref.layout = VK_IMAGE_LAYOUT_DEPTH_STENCIL_ATTACHMENT_OPTIMAL;
VkSubpassDescription subpass{};
subpass.pipelineBindPoint = VK_PIPELINE_BIND_POINT_GRAPHICS;
subpass.colorAttachmentCount = 1;
subpass.pColorAttachments = &colorAttachment_MSAA_Ref;
subpass.pDepthStencilAttachment = &depthAttachment_MSAA_Ref;
subpass.pResolveAttachments = &colorAttachment_Resolve_Ref;
VkSubpassDependency dependency{};
dependency.srcSubpass = VK_SUBPASS_EXTERNAL;
dependency.dstSubpass = 0;
dependency.srcStageMask = VK_PIPELINE_STAGE_COLOR_ATTACHMENT_OUTPUT_BIT | VK_PIPELINE_STAGE_EARLY_FRAGMENT_TESTS_BIT;
dependency.srcAccessMask = 0;
dependency.dstStageMask = VK_PIPELINE_STAGE_COLOR_ATTACHMENT_OUTPUT_BIT | VK_PIPELINE_STAGE_EARLY_FRAGMENT_TESTS_BIT;
dependency.dstAccessMask = VK_ACCESS_COLOR_ATTACHMENT_WRITE_BIT | VK_ACCESS_DEPTH_STENCIL_ATTACHMENT_WRITE_BIT;
std::array<VkAttachmentDescription, 4> attachments = { colorAttachment_MSAA, depthAttachment_MSAA,
                                                       colorAttachment_Resolve, depthAttachment_Resolve };
VkRenderPassCreateInfo renderPassInfo{};
renderPassInfo.sType = VK_STRUCTURE_TYPE_RENDER_PASS_CREATE_INFO;
renderPassInfo.attachmentCount = static_cast<uint32_t>(attachments.size());
renderPassInfo.pAttachments = attachments.data();
renderPassInfo.subpassCount = 1;
renderPassInfo.pSubpasses = &subpass;
renderPassInfo.dependencyCount = 1;
renderPassInfo.pDependencies = &dependency;

Resolving depth is not possible in vanilla Vulkan. It requires the VK_KHR_depth_stencil_resolve extension (promoted to core in Vulkan 1.2), which adds VkSubpassDescriptionDepthStencilResolve::pDepthStencilResolveAttachment to the API.
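For reference, a minimal sketch of what that looks like, assuming Vulkan 1.2 core names (with the extension, the equivalent *KHR aliases apply). Since VkSubpassDescriptionDepthStencilResolve chains into VkSubpassDescription2, the render pass must be built through vkCreateRenderPass2 with the *2 versions of the structures:

// Hypothetical sketch; attachment index 3 is depthAttachment_Resolve from above.
VkAttachmentReference2 depthResolveRef{};
depthResolveRef.sType = VK_STRUCTURE_TYPE_ATTACHMENT_REFERENCE_2;
depthResolveRef.attachment = 3;
depthResolveRef.layout = VK_IMAGE_LAYOUT_DEPTH_STENCIL_ATTACHMENT_OPTIMAL;
depthResolveRef.aspectMask = VK_IMAGE_ASPECT_DEPTH_BIT;

VkSubpassDescriptionDepthStencilResolve depthResolve{};
depthResolve.sType = VK_STRUCTURE_TYPE_SUBPASS_DESCRIPTION_DEPTH_STENCIL_RESOLVE;
depthResolve.depthResolveMode = VK_RESOLVE_MODE_SAMPLE_ZERO_BIT; // the one depth mode guaranteed to be supported
depthResolve.stencilResolveMode = VK_RESOLVE_MODE_NONE;
depthResolve.pDepthStencilResolveAttachment = &depthResolveRef;

VkSubpassDescription2 subpass2{};
subpass2.sType = VK_STRUCTURE_TYPE_SUBPASS_DESCRIPTION_2;
subpass2.pNext = &depthResolve; // chain the depth resolve into the subpass
subpass2.pipelineBindPoint = VK_PIPELINE_BIND_POINT_GRAPHICS;
// ...color/depth/resolve references as before, rewritten as VkAttachmentReference2,
// then create the pass with VkRenderPassCreateInfo2 / vkCreateRenderPass2.

Depth resolve modes beyond SAMPLE_ZERO (e.g. AVERAGE) can be queried from VkPhysicalDeviceDepthStencilResolveProperties.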

Related

How to optimize for a variable that goes into the argument of a function in pyomo?

I am trying to code a first order plus dead time (FOPDT) model and use it
for PID tuning. The inspiration for the work is the scipy code from: https://apmonitor.com/pdc/index.php/Main/FirstOrderOptimization
When I use model.Thetam() in the ODE constraint, it does not optimize Thetam; it keeps it at the initial value. When I use only model.Thetam, the code throws ValueError: object arrays are not supported if I remove it from the uf argument, i.e. model.Km * (uf(tt - model.Thetam)-model.U0)),
and if I remove it from the if statement (if tt > model.Thetam), then the error is: ERROR:pyomo.core:Rule failed when generating expression for Constraint ode with index 0.0: PyomoException: Cannot convert non-constant Pyomo expression (Thetam < 0.0) to bool. This error is usually caused by using a Var, unit, or mutable Param in a Boolean context such as an "if" statement, or when checking container membership or equality.
Code:
import pandas as pd
import matplotlib.pyplot as plt
from scipy.interpolate import interp1d
from pyomo.environ import *
from pyomo.dae import *

url = 'http://apmonitor.com/pdc/uploads/Main/data_fopdt.txt'
data = pd.read_csv(url)
data = data.iloc[1:]
t = data['time'].values - data['time'].values[0]
u = data['u'].values
yp = data['y'].values
u0 = u[0]
yp0 = yp[0]
yf = interp1d(t, yp)
# specify number of steps
ns = len(t)
delta_t = t[1]-t[0]
# create linear interpolation of the u data versus time
uf = interp1d(t, u, fill_value="extrapolate")
model = ConcreteModel()
model.T = ContinuousSet(initialize = t)
model.Y = Var(model.T)
model.dYdT = DerivativeVar(model.Y, wrt = (model.T))
model.Y[0].fix(yp0)
model.Yp0 = Param(initialize = yp0)
model.U0 = Param(initialize = u0)
model.Km = Var(initialize = 2, bounds = (0.1, 10))
model.Taum = Var(initialize = 3, bounds = (0.1, 10))
model.Thetam = Var(initialize = 0, bounds = (0, 10))
model.ode = Constraint(model.T,
                       rule = lambda model, tt: model.dYdT[tt] == (-(model.Y[tt]-model.Yp0) + model.Km * (uf(tt - model.Thetam())-model.U0))/model.Taum if tt > model.Thetam()
                       else model.dYdT[tt] == -(model.Y[tt]-model.Yp0)/model.Taum)
def obj_rule(m):
    return sum((m.Y[i] - yf(i))**2 for i in m.T)
model.obj = Objective(rule = obj_rule)
discretizer = TransformationFactory('dae.finite_difference')
discretizer.apply_to(model, nfe = 500, wrt = model.T, scheme = 'BACKWARD')
opt = SolverFactory('ipopt', executable='/content/ipopt')
opt.solve(model)  # , tee = True
model.pprint()
model2 = ConcreteModel()
model2.T = ContinuousSet(initialize = t)
model2.Y = Var(model2.T)
model2.dYdT = DerivativeVar(model2.Y, wrt = (model2.T))
model2.Y[0].fix(yp0)
model2.Yp0 = Param(initialize = yp0)
model2.U0 = Param(initialize = u0)
model2.Km = Param(initialize = 3.0145871)  # 3.2648
model2.Taum = Param(initialize = 1.85862177)  # 5.2328
model2.Thetam = Param(initialize = 0)  # 2.936839032  # 0.1
model2.ode = Constraint(model2.T,
                        rule = lambda model, tt: model.dYdT[tt] == (-(model.Y[tt]-model.Yp0) + model.Km * (uf(tt - model.Thetam())-model.U0))/model.Taum)
discretizer2 = TransformationFactory('dae.finite_difference')
discretizer2.apply_to(model2, nfe = 500, wrt = model2.T, scheme = 'BACKWARD')
opt2 = SolverFactory('ipopt', executable='/content/ipopt')
opt2.solve(model2)  # , tee = True
# model.pprint()
t = [i for i in model.T]
ypred = [model.Y[i]() for i in model.T]
ytrue = [yf(i) for i in model.T]
yoptim = [model2.Y[i]() for i in model2.T]
plt.plot(t, ypred, 'r-')
plt.plot(t, ytrue)
plt.plot(t, yoptim)
plt.legend(['pred', 'true', 'optim'])
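As an aside on that PyomoException: it is generic Pyomo behavior, not specific to this model. Comparing a Var in ordinary Python control flow builds a relational expression object, which cannot be coerced to a bool; it only becomes a concrete True/False once evaluated. A minimal illustration with a hypothetical toy model:

import pyomo.environ as pyo

m = pyo.ConcreteModel()
m.x = pyo.Var(initialize=0.0)

expr = m.x > 1.0                 # a Pyomo relational expression, not a bool
# if expr: ...                   # would raise the PyomoException quoted above
print(bool(pyo.value(expr)))     # evaluating first yields an ordinary bool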

Spacy v3 - ValueError: [E030] Sentence boundaries unset

I'm training an entity linker model with spacy 3, and am getting the following error when running spacy train:
ValueError: [E030] Sentence boundaries unset. You can add the 'sentencizer' component to the pipeline with: nlp.add_pipe('sentencizer'). Alternatively, add the dependency parser or sentence recognizer, or set sentence boundaries by setting doc[i].is_sent_start.
I've tried with both transformer and tok2vec pipelines; it seems to be failing on this line:
File "/usr/local/lib/python3.7/dist-packages/spacy/pipeline/entity_linker.py", line 252, in update sentences = [s for s in eg.reference.sents]
Running spacy debug data shows no errors.
I'm using the following config, before filling it in with spacy init fill-config:
[paths]
train = null
dev = null
kb = "./kb"
[system]
gpu_allocator = "pytorch"
[nlp]
lang = "en"
pipeline = ["transformer","parser","sentencizer","ner", "entity_linker"]
batch_size = 128
[components]
[components.transformer]
factory = "transformer"
[components.transformer.model]
#architectures = "spacy-transformers.TransformerModel.v3"
name = "roberta-base"
tokenizer_config = {"use_fast": true}
[components.transformer.model.get_spans]
@span_getters = "spacy-transformers.strided_spans.v1"
window = 128
stride = 96
[components.sentencizer]
factory = "sentencizer"
punct_chars = null
[components.entity_linker]
factory = "entity_linker"
entity_vector_length = 64
get_candidates = {"@misc":"spacy.CandidateGenerator.v1"}
incl_context = true
incl_prior = true
labels_discard = []
[components.entity_linker.model]
#architectures = "spacy.EntityLinker.v1"
nO = null
[components.entity_linker.model.tok2vec]
#architectures = "spacy.HashEmbedCNN.v1"
pretrained_vectors = null
width = 96
depth = 2
embed_size = 2000
window_size = 1
maxout_pieces = 3
subword_features = true
[components.parser]
factory = "parser"
[components.parser.model]
#architectures = "spacy.TransitionBasedParser.v2"
state_type = "parser"
extra_state_tokens = false
hidden_width = 128
maxout_pieces = 3
use_upper = false
nO = null
[components.parser.model.tok2vec]
#architectures = "spacy-transformers.TransformerListener.v1"
grad_factor = 1.0
[components.parser.model.tok2vec.pooling]
@layers = "reduce_mean.v1"
[components.ner]
factory = "ner"
[components.ner.model]
#architectures = "spacy.TransitionBasedParser.v2"
state_type = "ner"
extra_state_tokens = false
hidden_width = 64
maxout_pieces = 2
use_upper = false
nO = null
[components.ner.model.tok2vec]
#architectures = "spacy-transformers.TransformerListener.v1"
grad_factor = 1.0
[components.ner.model.tok2vec.pooling]
@layers = "reduce_mean.v1"
[corpora]
[corpora.train]
@readers = "spacy.Corpus.v1"
path = ${paths.train}
max_length = 0
[corpora.dev]
@readers = "spacy.Corpus.v1"
path = ${paths.dev}
max_length = 0
[training]
accumulate_gradient = 3
dev_corpus = "corpora.dev"
train_corpus = "corpora.train"
[training.optimizer]
@optimizers = "Adam.v1"
[training.optimizer.learn_rate]
@schedules = "warmup_linear.v1"
warmup_steps = 250
total_steps = 20000
initial_rate = 5e-5
[training.batcher]
@batchers = "spacy.batch_by_padded.v1"
discard_oversize = true
size = 2000
buffer = 256
[initialize]
vectors = ${paths.vectors}
[initialize.components]
[initialize.components.sentencizer]
[initialize.components.entity_linker]
[initialize.components.entity_linker.kb_loader]
@misc = "spacy.KBFromFile.v1"
kb_path = ${paths.kb}
I can write a script to add the sentence boundaries to the docs manually, but I am wondering why the sentencizer component is not doing this for me. Is there something missing in the config?
You haven't put the sentencizer in annotating_components, so the updates it makes aren't visible to other components during training. Take a look at the relevant section in the docs.
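A minimal sketch of the change, assuming spaCy 3.1+ (where annotating_components was introduced): list the sentencizer under the [training] block of the config above, so its predictions are set on the docs during training:

[training]
accumulate_gradient = 3
dev_corpus = "corpora.dev"
train_corpus = "corpora.train"
annotating_components = ["sentencizer"]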

Appending tables generated from a loop

I am a new Python user and am trying to append together data that I have pulled from a PDF using Camelot, but I am having trouble getting the tables to join.
Here is my code:
import camelot

url = 'https://www.fhfa.gov/DataTools/Downloads/Documents/HPI/HPI_AT_Tables.pdf'
tables = camelot.read_pdf(url, flavor='stream', edge_tol=500, pages='1-end')
i = 0
while i in range(0, tables.n):
    header = tables[i].df.index[tables[i].df.iloc[:,0]=='Metropolitan Statistical Area'].to_list()
    header = str(header)[1:-1]
    header = (int(header))
    tables[i].df = tables[i].df.rename(columns = tables[i].df.iloc[header])
    tables[i].df = tables[i].df.drop(columns = {'': 'Blank'})
    print(tables[i].df)
    #appended_data.append(tables[i].df)
    #if i > 0:
    #    dfs = tables[i-1].append(tables[i], ignore_index = True)
    #pass
    i = i + 1
Any help would be much appreciated.
You can use pandas.concat() to concatenate a list of DataFrames. Clean each table first, then concatenate once at the end (a single concat is cheaper than appending frame by frame):
import pandas as pd

for i in range(tables.n):
    header = tables[i].df.index[tables[i].df.iloc[:,0]=='Metropolitan Statistical Area'].to_list()
    header = str(header)[1:-1]
    header = (int(header))
    tables[i].df = tables[i].df.rename(columns = tables[i].df.iloc[header])
    tables[i].df = tables[i].df.drop(columns = {'': 'Blank'})

df_ = pd.concat([table.df for table in tables], ignore_index=True)

square root and powers on numpy arrays

I need help with square roots and powers. I am calculating the mean errors and have been told to do this for each element:
dmgfeerr = sqrt(dmgherr**2 - dfeherr**2)
but I get an error message: TypeError: only size-1 arrays can be converted to Python scalars.
Below is the data. It's something called quadrature error:
dfeh = d[1].data['Fe_H_2']
dmgh = d[1].data['MG_H']
dnh = d[1].data['N_H']
dch = d[1].data['C_H']
dalh = d[1].data['AL_H']
dmnh = d[1].data['MN_H']
dcfe = (dch) - (dfeh)
dnfe = (dnh) - (dfeh)
dalfe = (dalh) - (dfeh)
dmnfe = (dmnh) -(dfeh)
dmgfe = (dmgh) - (dfeh)
dfeherr = d[1].data['FE_H_ERR_2']
dmgherr = d[1].data['MG_H_ERR']
dalherr = d[1].data['AL_H_ERR']
dcherr = d[1].data['C_H_ERR']
dnherr = d[1].data['N_H_ERR']
dmnherr =d[1].data['MN_H_ERR']
dcfeerr = dcherr - dfeherr
dnfeerr = dnherr - dfeherr
dalfeerr = dalherr - dfeherr
dmnfeerr = dmnherr - dfeherr
dmgfeerr = dmgherr - dfeherr
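That TypeError is what Python's scalar math.sqrt raises when handed a whole NumPy array; np.sqrt applies elementwise. A minimal sketch, assuming dmgherr and dfeherr are the arrays loaded above (note the standard quadrature rule adds the squared errors, σ² = σ₁² + σ₂²; with the minus sign quoted in the question the radicand can go negative and produce NaNs):

import numpy as np

# elementwise quadrature error for [Mg/Fe] = [Mg/H] - [Fe/H];
# independent errors combine as sigma^2 = sigma1^2 + sigma2^2
dmgfeerr = np.sqrt(dmgherr**2 + dfeherr**2)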

Generating a value in a step function based on a variable

I'm creating an optimization model using Gurobi and am having trouble with one of my constraints. The constraint is used to establish the quantity and is based on supply and demand curves. The supply curve causes the problems, as it is a step curve. As seen in the code, the problem is where I'm writing the def MC section.
import pyomo.environ as pyo

Demand_Curve1_const = 250
Demand_Curve1_slope = -0.025
MC_water = 0
MC_gas = 80
MC_coal = 100
CAP_water = 5000
CAP_gas = 2500
CAP_coal = 2000
model = pyo.ConcreteModel()
model.Const_P1 = pyo.Param(initialize = Demand_Curve1_const)
model.slope_P1 = pyo.Param(initialize = Demand_Curve1_slope)
model.MCW = pyo.Param(initialize = MC_water)
model.MCG = pyo.Param(initialize = MC_gas)
model.MCC = pyo.Param(initialize = MC_coal)
model.CW = pyo.Param(initialize = CAP_water)
model.CG = pyo.Param(initialize = CAP_gas)
model.CC = pyo.Param(initialize = CAP_coal)
model.qw = pyo.Var(within = pyo.NonNegativeReals)
model.qg = pyo.Var(within = pyo.NonNegativeReals)
model.qc = pyo.Var(within = pyo.NonNegativeReals)
model.d = pyo.Var(within = pyo.NonNegativeReals)

def MC():
    if model.d <= 5000:
        return model.MCW
    if model.d >= 5000 and model.d <= 7500:
        return model.MCG
    if model.d >= 7500:
        return model.MCC

def Objective(model):
    return (model.Const_P1*model.d + model.slope_P1*model.d*model.d
            - (model.MCW*model.qw + model.MCG*model.qg + model.MCC*model.qc))

model.OBJ = pyo.Objective(rule = Objective, sense = pyo.maximize)

def P1inflow(model):
    return (MC == model.Const_P1 + model.slope_P1*model.d*2)

model.C1 = pyo.Constraint(rule = P1inflow)
Your function MC as stated would make the model nonlinear, and in a rather nasty way (discontinuous).
Piecewise linear functions are often modeled through binary variables or SOS2 sets (Special Ordered Sets of type 2). As you are using Pyomo, you can also use a tool that can generate MIP formulations automatically for you. See help(Piecewise).
An example that fits your description is here.
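For concreteness, a minimal sketch of the Piecewise route under the constants above (the component names here are hypothetical). The step MC curve integrates to a total cost that is piecewise linear in the quantity d, which Piecewise can model with an SOS2 formulation:

import pyomo.environ as pyo

model = pyo.ConcreteModel()
model.d = pyo.Var(bounds=(0, 9500))   # quantity; 9500 = total capacity
model.cost = pyo.Var()                # total supply cost

# MC is 0 up to 5000 (water), 80 up to 7500 (+gas), 100 up to 9500 (+coal),
# so the cumulative cost is piecewise linear with these breakpoints.
breakpoints = [0, 5000, 7500, 9500]

def total_cost(model, q):
    if q <= 5000:
        return 0.0
    if q <= 7500:
        return 80.0 * (q - 5000)
    return 80.0 * 2500 + 100.0 * (q - 7500)

model.pw_cost = pyo.Piecewise(model.cost, model.d,
                              pw_pts=breakpoints,
                              pw_constr_type='EQ',
                              f_rule=total_cost,
                              pw_repn='SOS2')

The objective can then use model.cost in place of the per-technology MC terms, and Pyomo generates the MIP formulation for you.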