Can't figure out how to group input settings into a preset so I don't have to keep changing my indicator's input settings for each timeframe

I wrote a buy/sell indicator in Pine Script, and I use different values for each timeframe (moving average lengths, signal smoothing, period, etc.). Basically, I want to create a preset for each timeframe. Something like this:
preset = input.string(title='Preset', defval='Normal', options=['Normal', 'Sensitive', 'Very Sensitive'])
if preset == 'Normal'
    length_ma1_normal := 50
    length_ma2_normal := 200
    plot(?)
if preset == 'Sensitive'
    length_ma1_normal := 50
    length_ma2_normal := 100
    plot(?)
My buy/sell signals get triggered based on EMA crossovers, MACD, etc. Say I choose "Normal": I want the chart to plot labels based on the 50/200 EMA, "Sensitive" triggers 21/50, and so on. When I choose a different preset, the chart should plot only the values for that preset. The buy/sell signals are triggered off four parameters, and each preset has different values for them; is it possible to plot all four for whichever preset is selected?
I'm looking to do something like this:
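Roughly this, assuming conditional expressions can pick the lengths per preset (a rough sketch in Pine Script v5; the 9/21 lengths for 'Very Sensitive' are made-up placeholders, since I haven't settled on them):

//@version=5
indicator('Preset demo', overlay=true)
preset = input.string(title='Preset', defval='Normal', options=['Normal', 'Sensitive', 'Very Sensitive'])
// pick the EMA lengths per preset: Normal = 50/200, Sensitive = 21/50,
// Very Sensitive = 9/21 (placeholders)
len_fast = preset == 'Normal' ? 50 : preset == 'Sensitive' ? 21 : 9
len_slow = preset == 'Normal' ? 200 : preset == 'Sensitive' ? 50 : 21
ema_fast = ta.ema(close, len_fast)
ema_slow = ta.ema(close, len_slow)
plot(ema_fast, 'Fast EMA')
plot(ema_slow, 'Slow EMA')
// label the crossovers that trigger the buy/sell signals
plotshape(ta.crossover(ema_fast, ema_slow), style=shape.labelup, location=location.belowbar, text='Buy')
plotshape(ta.crossunder(ema_fast, ema_slow), style=shape.labeldown, location=location.abovebar, text='Sell')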

Related

How to sort connection type into only 2 rows in Qlik Sense

I have a column named Con_TYPE in which there are multiple types of connections, such as fiberoptic, satellite, 3g, etc.
And I want to sort them into only 2 rows:
fiberoptic    5
others        115
Can anybody help me?
Thanks in advance
You can use a calculated dimension or a mapping load.
Let's imagine that the data, in its raw form, looks like this:
dimension: Con_TYPE
measure: Sum(value)
Calculated dimension
You can add an expression inside the dimension. With a simple if statement as the expression, the result is:
dimension: =if(Con_TYPE = 'fiberoptic', Con_TYPE, 'other')
measure: Sum(value)
Mapping load
Mapping load is a script function so we'll have to change the script a bit:
// Define the mapping. In our case we want to map only one value:
// fiberoptic -> fiberoptic
// i.e. we just want "fiberoptic" to keep showing up as "fiberoptic"
TypeMapping:
Mapping
Load * inline [
Old, New
fiberoptic, fiberoptic
];
RawData:
Load
Con_TYPE,
value,
// --> this is where the mapping is applied
// using the TypeMapping defined above, we map the values
// in the Con_TYPE field. The third parameter specifies what value
// should be used if the field value is not found in the
// mapping table. In our case we'll choose "Other"
ApplyMap('TypeMapping', Con_TYPE, 'Other') as Con_TYPE_Mapped
;
Load * inline [
Con_TYPE , value
fiberoptic, 10
satellite , 1
3g , 7
];
// No need to drop the "TypeMapping" table, since it's defined with the
// "Mapping" prefix and Qlik will drop it automatically at the end of the script
We can then use the new field Con_TYPE_Mapped in the UI. The result is:
dimension: Con_TYPE_Mapped
measure: Sum(value)
Pros/Cons
calculated dimension
+ easy to use
+ UI-only change
- can lead to performance issues on mid-size/large datasets
- has to be defined (copy/pasted) per table/chart, which can cause complications if it has to be changed across the whole app (it must be edited in every object where it is defined)
mapping load
+ no performance issues (it's just another field)
+ the mapping table can be defined inline or loaded from an external source (Excel, CSV, DB, etc.); a sketch follows this list
+ the new field can be used across the whole app, and changing the values in the script doesn't require any table/chart changes
- requires a reload if the mapping is changed
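For example, a mapping table loaded from an external file instead of inline might look like this (just a sketch; the library connection, file name and sheet name are placeholders, not from the original answer):

TypeMapping:
Mapping
Load Old, New
From [lib://DataFiles/type_mapping.xlsx]
(ooxml, embedded labels, table is Sheet1);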
P.S. In both cases, selecting Other in the tables will correctly filter the values and show data only for 3g and satellite.

How to make a pie chart of these values in Splunk

I have the following query: index=app (splunk_server_group=bex OR splunk_server_group=default) sourcetype=rpm-web* host=rpm-web* "CACHE_NAME=RATE_SHOPPER" method = GET | stats count(eval(searchmatch("found=true"))) as Hit, count(eval(searchmatch("found=false"))) as Miss
I need to make a pie chart of the two values, Hit and Miss rates.
The field where it is possible to distinguish the values is Message=[CACHE_NAME=RATE_SHOPPER some_other_strings method=GET found=false], where found can also be true.
Without knowing the structure of your data it's hard to say exactly what you need to do, but:
A pie chart is a single data series, so you need to use a transforming command to generate that single series. PieChart Doc
If you have a field that denotes a hit or miss (you could use an eval statement to create one if you don't already have it), you can use it to create the single series like this.
Let's say this field is called result.
| stats count by result
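If the field doesn't exist yet, one sketch of deriving it on the fly with eval (assuming hits are marked with found=true in the raw events, as in the question):

| eval result=if(searchmatch("found=true"), "Hit", "Miss")
| stats count by result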
Here is a link to the documentation for the eval command.
Good luck, hope you get the results you're looking for.
Since you seem to be concerned only about whether "found" equals either "hit" or "miss", try this:
index=app (splunk_server_group=bex OR splunk_server_group=default) sourcetype=rpm-web* host=rpm-web* "CACHE_NAME=RATE_SHOPPER" method=GET found IN("hit","miss")
| stats count by found
Pie charts require a single field so it's not possible to graph the Hit and Miss fields in a pie. However, if the two fields are combined into one field with two possible values, then it will work.
index=app (splunk_server_group=bex OR splunk_server_group=default) sourcetype=rpm-web* host=rpm-web* "CACHE_NAME=RATE_SHOPPER" method = GET
| eval result=if(searchmatch("found=true"), "Hit", "Miss")
| stats count by result

Change order of categorical bars in Plotly parallel categories

I am trying to visualize changes in gene expression as categorical variables (up, down, no change) over various timepoints.
I have a dataframe describing differential expression data that looks like this:
import pandas as pd
import plotly.express as px

data = {'gene': ['Svm3G0018840', 'Svm5G0011050', 'Svm9G0059770'],
        '01h': ['nc', 'up', 'down'], '04h': ['up', 'down', 'nc'], '08h': ['nc', 'down', 'up']}
df = pd.DataFrame.from_dict(data)
df = df.set_index('gene')
I can use this df to create the parallel plot using the following code (herbdf is my full dataframe, which also has the '24h' and '48h' timepoints):
fig = px.parallel_categories(herbdf, dimensions=['01h', '04h', '08h', '24h', '48h'],
                             labels={'01h': '', '04h': '', '08h': '', '24h': '', '48h': ''})
fig.show()
However, the categories (up, down, nc) are not always in the same order at every timepoint, which makes the figure very difficult to read. I can change this in the interactive figure in a notebook, but from there I can only output the corrected figure as a low-quality PNG. I need the image in SVG format, which means I need to use the line:
fig.write_image("/figs/herb_de_pp.svg")
But when I add this line to the code block to save the figure, I have no control over the order the categorical boxes end up in:
I have tried adding fig.update_* lines to solve this, such as:
fig.update_layout(xaxis={'categoryorder': 'total descending'})
but this doesn't seem to change the output at all.
I could be missing something simple; any help would be much appreciated!
Parallel categories diagrams don't have xaxis/yaxis properties; you need to update the traces in order to change the category order within the dimensions:
dimensions = ['01h', '04h', '08h','24h','48h']
...
fig.update_traces(dimensions=[{"categoryorder": "category descending"} for _ in dimensions])
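If you want a fixed order rather than an alphabetical sort, each dimension also accepts an explicit category list (a sketch; parcats dimensions support "categoryorder": "array" together with a "categoryarray"):

fig.update_traces(dimensions=[{"categoryorder": "array", "categoryarray": ["up", "nc", "down"]} for _ in dimensions])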
Not a great answer here, but something that I think will work in a pinch...
It looks like the order of the categories in each figure/column comes from the order they appear in the original dataset. That is, in your first column, nc is the first unique item, then down is the second, and up is the third.
So, if you can rearrange/sort your data so that it shows up in the order you want it displayed, that should work.
Have your first row be nc | nc | nc | nc | nc, your second row down | down | down | down | down, and your third row up | up | up | up | up (assuming you actually have records like that). That should do it, but it isn't very elegant...
Given the above solution, this is the line needed to sort the dataframe and produce the figure with ordered categories:
sorteddf = df.sort_values(by=['01h','04h','08h'], axis=0, ascending=False)
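The sorted frame is then what gets passed to Plotly before writing the SVG (a sketch, using the sample columns from the question):

fig = px.parallel_categories(sorteddf, dimensions=['01h', '04h', '08h'])
fig.write_image("/figs/herb_de_pp.svg")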

Odoo: force a field to be no higher than x

In the sales order there is a discount % field. Is it possible to ensure that users only input values lower than x, so the field will not accept any value higher than x?
You can achieve this by creating a function with the @api.onchange('discount') decorator in Python code to ensure the discount value is not higher than x; if it is higher, set the discount field back to x. It's also possible to show a warning popup from that function.
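A minimal sketch of that onchange approach (assuming x = 10 and a custom module inheriting sale.order.line; the method name is just illustrative):

from odoo import api, models

class SaleOrderLine(models.Model):
    _inherit = 'sale.order.line'

    @api.onchange('discount')
    def _onchange_discount_cap(self):
        # clamp the discount back to x and warn the user
        if self.discount > 10:
            self.discount = 10
            return {'warning': {
                'title': 'Discount too high',
                'message': 'The discount cannot exceed 10%.',
            }}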
But if a Python code change is not preferred, you can also achieve this with an Automated Action: create a new rule on sale.order.line, set the trigger to On form modification and the action to Execute python code, and add the following code
if record.discount > 10:
    record.discount = 10
Where 10 is the value of x. This ensures the discount never exceeds x.
Op1: If you want to enforce that behaviour you can add an SQL constraint to the database. This works fine if your X value never changes; something like this:
_sql_constraints = [
    ('discount_less_than_X', 'CHECK(discount < X)', 'The discount value should be less than X'),
]
This constraint will trigger in all cases (create, write, import, SQL insert).
Replace X with the desired value.
Op2: Use the api.constrains decorator, get the X value from somewhere else, and apply the restriction, something like this:
# ValidationError is imported from odoo.exceptions
@api.constrains('discount')
def _check_discount(self):
    # Create a key:value parameter in the `ir.config_parameter` table beforehand.
    discount_max_value = self.env['ir.config_parameter'].sudo().get_param('discount_max_value')
    for rec in self:
        if rec.discount > int(discount_max_value):
            raise ValidationError('The discount value should be less than {}'.format(discount_max_value))
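The parameter itself can be created once from code (or via Settings > Technical > System Parameters); a sketch, using the same key as above:

self.env['ir.config_parameter'].sudo().set_param('discount_max_value', '10')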
I hope this answer is helpful for you.

Sentinel 1 data gaps in swath overlap (not sequential scenes) in Google Earth Engine

I am working on a project using the Sentinel 1 GRD product in Google Earth Engine, and I have found a couple of examples of missing data, apparently in swath overlaps in the descending orbit. This is not the issue discussed here and explained on the GEE developers forum; it is a much larger gap and does not appear to be the product of the terrain correction explained for that other issue.
This gap seems to persist regardless of year-to-year changes in the date range or of polarization. The gap is resolved by changing the orbit filter param from 'DESCENDING' to 'ASCENDING' (presumably because of the different swaths) or by increasing the date range. I get that increasing the date range increases revisits and thus coverage, but is this then just a byproduct of the orbital geometry, i.e. does it take more than the standard temporal repeat to image that area? I am just trying to understand where this data gap is coming from.
Code example:
var geometry = ee.Geometry.Polygon(
    [[[-123.79472413785096, 46.20720039434629],
      [-123.79472413785096, 42.40398120362418],
      [-117.19194093472596, 42.40398120362418],
      [-117.19194093472596, 46.20720039434629]]], null, false);

var filtered = ee.ImageCollection('COPERNICUS/S1_GRD')
    .filterDate('2019-01-01', '2019-04-30')
    .filterBounds(geometry)
    .filter(ee.Filter.eq('orbitProperties_pass', 'DESCENDING'))
    .filter(ee.Filter.listContains('transmitterReceiverPolarisation', 'VH'))
    .filter(ee.Filter.listContains('transmitterReceiverPolarisation', 'VV'))
    .filter(ee.Filter.eq('instrumentMode', 'IW'))
    .select(['VV', 'VH']);
print(filtered);

var filtered_mean = filtered.mean();
print(filtered_mean);

Map.addLayer(filtered_mean.select('VH'), {min: -25, max: 1}, 'filtered');
You can view an example here: https://code.earthengine.google.com/26556660c352fb25b98ac80667298959
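For reference, a sketch of one of the workarounds described above: dropping the orbit-pass filter so that both ASCENDING and DESCENDING acquisitions contribute to the composite (all other filters unchanged):

var bothOrbits = ee.ImageCollection('COPERNICUS/S1_GRD')
    .filterDate('2019-01-01', '2019-04-30')
    .filterBounds(geometry)
    .filter(ee.Filter.listContains('transmitterReceiverPolarisation', 'VH'))
    .filter(ee.Filter.listContains('transmitterReceiverPolarisation', 'VV'))
    .filter(ee.Filter.eq('instrumentMode', 'IW'))
    .select(['VV', 'VH'])
    .mean();
Map.addLayer(bothOrbits.select('VH'), {min: -25, max: 1}, 'both orbits');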