Using Prebid's price granularity "high" for header bidding

We have our price granularity for Prebid set to high. However, since high is capped at $20, we're unable to accept a bid of $30 or $40.
How can we stick with price granularity high in Prebid, but automatically round down to $20 whenever a bid comes in north of $20, so that we can accept the bid?
Thank you.

According to the Prebid documentation it should already be rounded down, but you can explicitly control it by adding the following code, assuming you're using the standard Prebid targeting keys:
pbjs.bidderSettings.standard = {
    adserverTargeting: [{
        key: 'hb_bidder',
        val: function (bidResponse) {
            return bidResponse.bidderCode;
        }
    }, {
        key: 'hb_adid',
        val: function (bidResponse) {
            return bidResponse.adId;
        }
    }, {
        key: 'hb_pb',
        val: function (bidResponse) {
            var cpm = bidResponse.cpm;
            if (cpm > 20.00) {
                cpm = 20.00; // cap anything north of $20 at the top "high" bucket
            }
            return (Math.floor(cpm * 100) / 100).toFixed(2);
        }
    }]
};
Add this after pbjs.addAdUnits(ad_units) is called.
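For context, here's a minimal sketch of where that override sits in a typical Prebid setup (sendAdserverRequest is a placeholder for your own ad server callback; ad_units is your ad unit array):

pbjs.que.push(function () {
    // register ad units first
    pbjs.addAdUnits(ad_units);
    // then override the standard targeting keys as shown above
    pbjs.bidderSettings.standard = {
        adserverTargeting: [ /* ...keys from above... */ ]
    };
    pbjs.requestBids({
        bidsBackHandler: sendAdserverRequest // placeholder callback
    });
});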

You'll want to use the custom setting instead of high to get what you want.
See an example here: http://prebid.org/dev-docs/publisher-api-reference.html#customCPMObject
In that JSON object you'll want to redefine the same granularity settings that high uses, and then build on them to add additional granularity above $20.
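As a rough sketch (the bucket boundaries above $20 here are illustrative, not taken from the Prebid docs), a custom CPM object that replicates high and then extends it could look like the following. Older Prebid versions apply it with pbjs.setPriceGranularity; newer versions use pbjs.setConfig({ priceGranularity: ... }).

var customGranularity = {
    "buckets": [{
        "precision": 2,    // decimal places used in hb_pb
        "min": 0,
        "max": 20,         // same as "high": $0.01 steps up to $20
        "increment": 0.01
    }, {
        "precision": 2,
        "min": 20,
        "max": 50,         // illustrative upper bound
        "increment": 5     // coarser $5 steps above $20
    }]
};
pbjs.setPriceGranularity(customGranularity);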
Pro tips:
- If you do not define buckets between $0 and some $x.xx, that gap can act as a floor if you never want bids lower than $x going to DFP.
- DFP caps the number of line items in an order at 450, so you may need one or more additional orders for your new granularity.
- Always overlap the bucket boundaries, e.g. if bucket 1 is $0-$3.00, bucket 2 should start at $3.00 (if you don't, a $3.009 bid won't be passed in to DFP).


Pine Script TradingView Query

I'm quite new to Pine and I'm hoping for some assistance, or to be pointed in the right direction. My goal is to script BUY/SELL signals which will use a PERCENTAGE of an available sum, for use with a trading bot via TradingView.
For example, with starting capital of $100, a buy signal comes through, which triggers a buy of 25% ($25) worth of X coin.
I have been unable to find the code to set up the more complicated percentage-based buy/sell orders. Perhaps I'm blind; if so, tell me! :) The below code provided by TradingView would use all available capital in the account to make one buy trade... which is not ideal.
{
    "action": "BUY",
    "botId": "1234"
}
I have tried this to add a percentage, but to no avail:
{
    "action": "BUY" percent_of_equity 25,
    "botId": "1234"
}
Any suggestions? Am I missing anything obvious?
The code above is not related to Pine Script; it is part of the alarm/webhook action.
If you want to declare equity in your (back)order inside Pine Script, you have to use a strategy, something like this:
//@version=4
strategy(title = "example",
     shorttitle = "myscript",
     overlay = true,
     pyramiding = 1,
     default_qty_type = strategy.percent_of_equity,
     default_qty_value = 50,
     initial_capital = 1000)
This sets the default quantity type to percent of equity, with the percentage (50% here) declared in default_qty_value; strategy.entry() calls then size their orders from these defaults unless you pass an explicit qty.

InfluxDB 1.8 schema design for an industrial application?

I have a node-red S7 PLC link pushing the following data to InfluxDB at a 1.5 second cycle.
msg.payload = {
    name: 'PLCTEST',
    level1_m: msg.payload.a90,      // value payload from PLC passed to InfluxDB
    power1: msg.payload.a93,
    'valvepos_%': msg.payload.a107, // quoted because "%" is not valid in a bare key
    temp1: msg.payload.a111,
    washer_acidity: msg.payload.a113
    // etc.
};
return msg;
In total there are 130 individual data points, consisting of binary states like alarms and button presses, plus measurements (temp, pressure, flow...).
This has been running for a week now as a stress test for DB writes. Writing seems to be fine, but I have noticed that if I swap from a 30 min query window to a 3 hr window for 10 temperature measurements in a Grafana dashboard, the load times start to get annoyingly long, and a 12 hr window is a no-go. I assume this is because all my things are pushed as field keys and field values; without indexes, this is straining the database.
The Grafana query inspector gives me 1081 rows per measurement query, so x10 = 10,810 rows per dashboard query. But the whole pool InfluxDB has to go through is 130 measurements x 1081 = 140,530 rows per 3 hr window.
I would like to get a few pointers on how to optimize the schema. I have the following in mind:

DB: Application_nameX
Measurement: Process_metrics
Tags: Temp, press, flow, %, Level, acidity, Power
Tag values: CT-xx1...CT-xxn, CP-xx1...CP-xxn, CF-xx1...CF-xxn, ...
Field key: Value, field value: value
Measurement: Alarms_On
Field key: State, field value: "true", "false"
Measurement: Binary_ON
Field key: State, field value: "true", "false"
For a few temps, this would then be in node-red (I think):

msg.payload = [{
    // field set: note that repeating a key in a JS object literal
    // just overwrites it, so only one Value per point would survive
    Value: msg.payload.xxx,  // value payload from PLC passed to InfluxDB
    Value: msg.payload.xxx,
    Value: msg.payload.xxx
},
{
    // tag set: same problem with the repeated Temp key
    Temp: "CT_xx1",
    Temp: "CT_xx2",
    Temp: "CT_xx2"
}];
return msg;
EDIT: Following Robert's comments.
I read the InfluxDB manuals for a week, and other samples online, before writing here. Somehow InfluxDB is just different and unique enough from the normal SQL mindset that I do find this unusually difficult. But I did have a few moments of clarity over the weekend.
I think the following would be more appropriate:

DB: Station_name
Measurements: Process_metrics, Alarms, Binary
Tags: "SI_metric"
Tag values: "Temperature", "Pressure", etc.
Field key: "process_position" = CT/P/F_xxx
Field values: process_values
This should prevent the cardinality going bonkers compared to my original idea.
I think alarms and binaries can be left as field key/field value only, and separating them into their own measurements should give enough filtering. These are also logged only on state change, and thus produce a lot less input to the database than the analogs at a 1 s cycle.
Following my original node-red flow code, this would translate to a batch output function:
msg.payload = [
    {
        measurement: "Process_metrics",
        fields: {
            CT_xx1: msg.payload.xxx,
            CT_xx2: msg.payload.xxx,
            CT_xx3: msg.payload.xxx
        },
        tags: {
            metric: "temperature"
        }
    },
    {
        measurement: "Process_metrics",
        fields: {
            CP_xx1: msg.payload.xxx,
            CP_xx2: msg.payload.xxx,
            CP_xx3: msg.payload.xxx
        },
        tags: {
            metric: "pressure"
        }
    },
    {
        measurement: "Process_metrics",
        fields: {
            CF_xx1: msg.payload.xxx,
            CF_xx2: msg.payload.xxx,
            CF_xx3: msg.payload.xxx
        },
        tags: {
            metric: "flow"
        }
    },
    {
        measurement: "Process_metrics",
        fields: {
            AP_xx1: msg.payload.xxx,
            AP_xx2: msg.payload.xxx,
            AP_xx3: msg.payload.xxx
        },
        tags: {
            metric: "Pumps"
        }
    },
    {
        measurement: "Binary_states",
        fields: {
            Binary1: msg.payload.xxx,
            Binary2: msg.payload.xxx,
            Binary3: msg.payload.xxx
        }
    },
    {
        measurement: "Alarms",
        fields: {
            Alarm1: msg.payload.xxx,
            Alarm2: msg.payload.xxx,
            Alarm3: msg.payload.xxx
        }
    }
];
return msg;
return msg;
EDIT 2:
Final thoughts after testing my above idea and refining it further.
My second idea did not work as intended. The final step with Grafana variables did not work, as the process data had the info needed in fields and not as tags. This made the Grafana side annoying, with regex queries needed to get the PLC tag name info from the fields to link into the Grafana variable drop-down lists, and thus again running resource-intensive field queries.
I stumbled on a blog post about how to get your mind straight with a TSDB, and the above idea is still too SQL-like an approach to the data. I refined the DB structure some more, and I seem to have found a compromise between coding time across the different steps (PLC -> NodeRed -> InfluxDB -> Grafana) and query load on the database: from 1 GB RAM usage when stressing with writes and queries, down to 100-300 MB in the normal usage test.
Currently in testing:
A Python script to crunch the PLC-side tags and descriptions from CSV into a copy-pastable format for Node-RED. Example for extracting temperature measurements from the CSV and formatting them for node-red:
import pandas as pd

file1 = r'C:\Users\...\pandastestcsv.csv'  # path shortened here
df1 = pd.read_csv(file1, sep=';')

# keep only the rows whose POS column contains a CT (temperature) tag
dfCT = df1[df1['POS'].str.contains('CT', regex=False, na=False)]

def my_functionCT(x, y):
    # print one Node-RED/InfluxDB point for a temperature measurement
    print("{measurement:\"temperature\",fields:{value:msg.payload." + x + ",},tags:{CT:\"" + y + "\",},},")

result = [my_functionCT(x, y) for x, y in zip(dfCT['ID'], dfCT['POS'])]
The output of this is all the CT temperature measurements from the CSV, e.g.: {measurement:"temperature",fields:{value:msg.payload.a1,},tags:{CT:"tag description with process position CT_310",},},
This list can be copy-pasted into the Node-RED data link payload to InfluxDB.
InfluxDB:
database: PLCTEST
measurements: temperature, pressure, flow, pumps, valves, alarms, on_off...
tag keys: CT, CP, CF, misc_mes...
tag values: "PLC description of the tag"
field key: value
field value: "process measurement value from PLC payload"
This keeps the cardinality per measurement in check within reason, and queries can be better targeted to relevant data without running through the whole DB. RAM and CPU loads are now minor, and jumping from a 1 h to a 12 h query window in Grafana loads in seconds without lock-ups.
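For reference, a minimal sketch of a single batch point under this final layout (the a111 address and the description string are placeholders following the examples above):

msg.payload = [{
    measurement: "temperature",
    fields: {
        value: msg.payload.a111 // process measurement value from the PLC
    },
    tags: {
        CT: "tag description with process position CT_310"
    }
}];
return msg;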
When designing an InfluxDB measurement schema, we need to be very careful when selecting tags and fields.
Each tag value will create a separate series, and as the number of tag values increases, the memory requirement of the InfluxDB server grows rapidly.
From the description of the measurement given in the question, I can see that you are keeping high-cardinality values like temperature, pressure, etc. as tag values. These values should be kept as fields instead.
InfluxDB indexes tag values for faster search, and a separate series is created for each tag value. As the number of tag values increases, so does the number of series, eventually leading to out-of-memory errors.
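For illustration (hypothetical line protocol, not taken from the question's data), compare writing a reading as a tag value versus as a field value:

# high cardinality: the reading itself is a tag value, so every new reading creates a new series
process_metrics,temp1=23.4 state=1
# better: the reading is a field value, so the series count stays fixed
process_metrics,metric=temperature temp1=23.4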
Quoting from the InfluxDB documentation:
"Tags containing highly variable information like UUIDs, hashes, and random strings lead to a large number of series in the database, also known as high series cardinality. High series cardinality is a primary driver of high memory usage for many database workloads."
Please refer to the InfluxDB documentation on designing schemas for more details:
https://docs.influxdata.com/influxdb/v1.8/concepts/schema_and_data_layout/

How to get the document's total number of pages and the current page's numeric index in the epubjs library?

I'm working on a book-reader app developed using epubjs-rn. I want to know how I can get the book's total number of pages and the numeric index of the current page. I would be grateful if somebody could teach me this.
This is not super clear-cut: the concept of "page numbers" in the traditional sense does not really work here. But what we do have is the total number of locations.
For the onLocationsReady prop, record the total number of locations:
onLocationsReady={(locations) => {
    this.setState({ totalNumberOfLocations: locations.total });
}}
Then look at the onLocationChange prop:
onLocationChange={(visibleLocation) => {
    this.setState({ visibleLocation });
}}
You then have all the necessary information to compute the rough "page" (location) number and the progress percentage:
// gives the current "page" (location) number
_formatCurrentPosition() {
    return Math.floor(this.state.totalNumberOfLocations * this.state.visibleLocation.start.percentage);
}

// formats the percentage, since the raw value can be very long
_formatProgressPercentage() {
    return Math.floor(this.state.visibleLocation.start.percentage.toFixed(4) * 100);
}
Note
On mount, visibleLocation will be unknown, so you will need to make sure that the visible location and the total number of locations are available prior to rendering them.
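As a minimal sketch of that guard (assuming a React Native component with a Text element for the read-out):

render() {
    const { visibleLocation, totalNumberOfLocations } = this.state;
    // both values stay undefined until the epub has generated its locations
    if (!visibleLocation || !totalNumberOfLocations) {
        return null; // or a loading indicator
    }
    return (
        <Text>
            Location {this._formatCurrentPosition()} of {totalNumberOfLocations} ({this._formatProgressPercentage()}%)
        </Text>
    );
}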

Hotcakes access variant prices via SingleProductViewModel

From the SingleProductViewModel, what is the best way to access prices for the variants associated with the product? From the documentation page linked above, I see that SingleProductViewModel contains a Product object, but I'm not sure how to use that to get the prices of variants. (I can't find a listing of properties for the Product object.)
Here is my specific use case: I have a Hotcakes Category Viewer, and I'd like each product listed to display the range of prices for all variants of that product, rather than just the price for the main product. For example, a fedora product would display its price as "$10 - $30" if the product contained variants priced at $10, $20, and $30. I happen to be using the "simple" view of the category viewer, so I expect to implement this in _RenderSingleProductSimple.cshtml; however, I'm interested in using this for other category views, too.
Thanks in advance.
From what I've seen, most people change their viewset to say something like "Starting at [PRICE]" or "As Low As [PRICE]" when a variant is detected.
If you'd like to show the full range of prices, that can be done too, but you should know that depending on how many products have variants, and how many variants are in the view overall, this could have a negative performance impact on the site, ranging from negligible to undesirable.
The documentation you mentioned includes information about the Item property of the SingleProductViewModel class. This property includes all of the variant information you're looking for.
So, you could use the Item.HasVariants property to determine whether you need a different label. If it returns true, you can then iterate through the Item.Variants property to get all of the prices and find the lowest and highest ones to display.
Thanks @Will Strohl, that is helpful information.
I've put together the following code, which seems to achieve the original aim. Note that I said "variants" in the question, and these are product variants in our implementation; however, we achieve price adjustments for the variants via product options, so the code below looks at Model.Item.Options rather than Model.Item.Variants. Also, regarding price, I ignored user price details that weren't relevant to our implementation, and so used Model.Item.ListPrice rather than Model.UserPrice.DisplayPrice.
<div class="hc-recprice">
    @{
        string priceToDisplay = "";
        if (Model.Item.HasOptions()) {
            decimal minPrice = Decimal.MaxValue;
            decimal maxPrice = Decimal.MinValue;
            decimal oiPrice = 0;
            Hotcakes.Commerce.Catalog.OptionList options = Model.Item.Options;
            foreach (Hotcakes.Commerce.Catalog.Option o in options) {
                foreach (Hotcakes.Commerce.Catalog.OptionItem oi in o.Items) {
                    // price for this option item = base list price + its adjustment
                    oiPrice = Model.Item.ListPrice + oi.PriceAdjustment;
                    if (oiPrice < minPrice) {
                        minPrice = oiPrice;
                    }
                    if (oiPrice > maxPrice) {
                        maxPrice = oiPrice;
                    }
                }
            }
            if (minPrice == maxPrice) {
                priceToDisplay = string.Format("{0:C0}", minPrice);
            } else {
                priceToDisplay = string.Format("{0:C0}", minPrice) + " - " + string.Format("{0:C0}", maxPrice);
            }
        } else {
            priceToDisplay = string.Format("{0:C0}", Model.Item.ListPrice);
        }
    }
    @Html.Raw(priceToDisplay)
</div>

dijit filteringSelect with min length

I can't seem to find a way to require the FilteringSelect input to be of a certain length. I've tried this:
new dijit.form.FilteringSelect({
    name: 'bla',
    store: jsonRestStore,
    searchAttr: 'name',
    pattern: '.{3,}',
    regExp: '.{3,}'
});
but it doesn't change a thing. I want the FilteringSelect to only query the store once at least 3 characters have been entered. Can't be that exotic a requirement, can it? There are thousands of items behind that store, so querying it with just 1 or 2 characters is slow.
I did a bit more searching and found this post on the dojo mailing list. To summarize, there is no native support for it in the FilteringSelect, but it is extremely easy to implement:
// extend FilteringSelect so it only searches once enough characters are typed
var MinKeyFilteringSelect = dojo.declare(dijit.form.FilteringSelect, {
    // custom minimum input character count to trigger a search
    minKeyCount: 3,

    // override the search method and check the input length first
    _startSearch: function (/*String*/ key) {
        if (!key || key.length < this.minKeyCount) {
            this.closeDropDown();
            return;
        }
        this.inherited(arguments);
    }
});
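A minimal usage sketch with the options from the question (MinKeyFilteringSelect is just the variable the declare call above was assigned to):

new MinKeyFilteringSelect({
    name: 'bla',
    store: jsonRestStore,
    searchAttr: 'name',
    minKeyCount: 3 // only hit the store after 3 characters
});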
Also, in the API docs there is a searchDelay attribute, which could be helpful in minimizing the number of queries:
searchDelay
Delay in milliseconds between when the user types something and when we start searching based on that value.