Scrapy middleware: what does the number mean (e.g. 'ProxyMiddleware': 410)?

Sorry for the very basic question, but what does the 410 mean in 'myproject.middlewares.ProxyMiddleware': 410? (Apparently it's so obvious that nobody talks about it!)
'RandomUserAgentMiddleware': 400
'HttpProxyMiddleware': 110
'ProxyMiddleware': 100
I did not find anything about it in the tutorial.
EDIT: It's not a duplicate of this: the answers say the number is used to sort the order, but they don't explain why a specific number is used. Why does RandomUserAgentMiddleware in my example above use 400, and not 399 or 401? Is there a reason for that, or can we roughly take any number that fits the order?

The number can be roughly any number that fits the order; spacing the numbers out also gives you the flexibility to fit other middlewares in between.
So you use 100, 200, 300, ... instead of 1, 2, 3, ..., which leaves room when adding middlewares between the existing ones. Finally, the middlewares are sorted by this number and executed in that order. So
{
    "A": 200,
    "B": 400,
    "C": 300
}
is equivalent to
{
    "C": 200,
    "A": 100,
    "B": 400
}
Both would execute the middlewares in the order A, C, B.
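The sorting rule is easy to check in plain Python (the dicts here are the illustrative ones from above, not real middleware paths):

```python
# Scrapy orders middlewares by ascending value; the exact numbers are
# irrelevant as long as they produce the same sort order.
first = {"A": 200, "B": 400, "C": 300}
second = {"C": 200, "A": 100, "B": 400}

order_first = sorted(first, key=first.get)
order_second = sorted(second, key=second.get)

print(order_first)   # ['A', 'C', 'B']
print(order_second)  # ['A', 'C', 'B']
```

This is why conventionally spaced values like 100, 200, 300 are preferred: a new middleware can slot in at, say, 250 without renumbering anything else.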

Pine Script TradingView Query

I'm quite new to Pine and I'm hoping for some assistance, or to be pointed in the right direction. My goal is to script BUY/SELL signals which will use a PERCENTAGE of an available sum, for use with a trading bot via TradingView.
For example: starting capital $100, a buy signal comes through, which triggers a buy of 25% ($25) worth of X coin.
I have been unable to find the code to set up the more complicated percentage-based buy/sell orders. Perhaps I'm blind; if so, tell me! :) The code below, provided by TradingView, would use all available capital in the account to make one buy trade... which is not ideal.
{
"action": "BUY",
"botId": "1234"
}
I have tried this to add % but to no avail:
{
"action": "BUY" percent_of_equity 25,
"botId": "1234"
}
Any suggestions? Am I missing anything obvious?
The code above is not related to Pine Script; it is part of an alarm/webhook action.
If you want to declare equity in your (back)order inside Pine Script, you have to use strategy() something like this:
strategy(title = "example",
         shorttitle = "myscript",
         overlay = true,
         pyramiding = 1,
         default_qty_type = strategy.percent_of_equity,
         default_qty_value = 50,
         initial_capital = 1000)
This sets the order quantity type to percent of equity, with the 50% declared in default_qty_value.
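For intuition, the sizing math that percent_of_equity implies can be sketched outside Pine (a Python sketch with hypothetical numbers, ignoring fees and slippage):

```python
def order_size(equity, pct_of_equity, price):
    """Units of the asset bought when spending pct_of_equity % of equity."""
    budget = equity * pct_of_equity / 100.0
    return budget / price

# With initial_capital = 1000 and default_qty_value = 50 (i.e. 50%),
# a buy at price 100 spends $500 and purchases 5 units.
print(order_size(1000, 50, 100))  # 5.0
```

For the asker's example (capital $100, 25% per signal, coin price $25), this would buy exactly 1 unit.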

Generate random numbers that only occur once in JMeter

I want to generate an array of random numbers that only occur once for multiple inputs in JMeter. For example for a range of 1-100:
"age": ${__Random(1,101)},
"weight": ${__Random(1,101)},
"height": ${__Random(1,101)}
There is a chance that two of the variables will have the same value; how can I avoid that?
For unique random numbers you will need to add a JSR223 Sampler using ThreadLocalRandom with the following code:
import java.util.concurrent.ThreadLocalRandom;

// draw 3 distinct values from 1..100 (the upper bound of ints() is exclusive)
int[] array = ThreadLocalRandom.current().ints(1, 101).distinct().limit(3).toArray();
vars.put("age", String.valueOf(array[0]));
vars.put("weight", String.valueOf(array[1]));
vars.put("height", String.valueOf(array[2]));
And then reference the variables in the request:
"age": ${age},
"weight": ${weight},
"height": ${height}
You can also generate a random string with __RandomString, e.g. for unique e-mail addresses:
SuperQA${__RandomString(4,ABCDEFGHIJKLMNOPQRSTUVWXYZ999999999999)}@gmail.com
Here 4 is the length of the generated string, and the second argument is the set of characters to choose from.

MongoDB using $and with slice and a match

I'm using @Query from the Spring Data package and I want to query on the last element of an array in a document.
For example the data structure could be like this:
{
    name : 'John',
    scores: [10, 12, 14, 16]
},
{
    name : 'Mary',
    scores: [78, 20, 14]
}
So I've built a query; however, it fails with "unknown operator: $slice" on the server.
The $slice part of the query, when run separately, is fine:
db.getCollection('users').find({}, {scores: { $slice: -1 }})
However as soon as I combine it with a more complex check, it gives the error as mentioned.
db.getCollection('users').find({"$and": [{}, {"scores": {"$slice": -1}}, {"scores": "16"}]})
This query would return the list of users who had a last score of 16, in my example John would be returned but not Mary.
I've put it into a standard mongo query (to debug things); ideally, though, I need it to go into a Spring Data @Query construct - they should be fairly similar.
Is there any way of doing this without resorting to hand-cranked Java calls? I don't see much documentation for @Query, other than that it takes standard queries.
As commented with the linked post, that refers to aggregate - how does that work with @Query? Plus one of the main answers uses $where, which is inefficient.
The general way forward with the problem is unfortunately in the data. Although @Veeram's response is correct, it means that you do not hit indexes. This is an issue once you've got very large data sets, and you will see ever-decreasing response times. It's something $where and $arrayElemAt cannot help you with: they have to pre-process the data, and that means a full collection scan. We analysed several queries with these constructs and they all involved a COLSCAN.
The solution is ideally to create a field that contains the last item, for instance:
{
    name : 'John',
    scores: [10, 12, 14, 16],
    lastScore: 16
},
{
    name : 'Mary',
    scores: [78, 20, 14],
    lastScore: 14
}
You could create a listener to maintain this as follows:
@Component
public class ScoreListener extends AbstractMongoEventListener<Scores>
You then get the ability to sniff the data and make any updates:
@Override
public void onBeforeConvert(BeforeConvertEvent<Scores> event) {
    // process any score and set lastScore
}
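The denormalization the listener performs is trivial; a Python sketch of the same transformation (hypothetical helper, field names taken from the example documents above):

```python
def with_last_score(doc):
    """Copy the final element of scores into lastScore before saving."""
    if doc.get("scores"):
        doc["lastScore"] = doc["scores"][-1]
    return doc

john = with_last_score({"name": "John", "scores": [10, 12, 14, 16]})
print(john["lastScore"])  # 16
```

The query then becomes a plain equality match on lastScore, which is what makes it indexable.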
Don't forget to update your indexes (!):
@CompoundIndex(name = "lastScore", def = "{"
        + "'lastScore': 1"
        + " }")
Although this has the disadvantage of slightly duplicating data, in current Mongo (3.4) it really is the only way of doing this AND hitting indexes in the search. The speed difference was dramatic: from nearly a minute of response time down to milliseconds.
In Mongo 3.6 there may be better ways of doing this; however, we are fixed on this version, so this has to be our solution.

Using Prebid's price granularity high

We have our Price Granularity for Prebid set to high. However, since it's capped at $20, if we get a bid for $30 or $40 we're unable to accept it.
How can we stick with Price Granularity high in Prebid but, in instances where we have a bid north of $20, automatically round it down to $20 so that we can accept the bid?
Thank you.
According to the Prebid documentation it should already be rounded down, but you can control it explicitly by adding the following code.
Assuming you're using the standard Prebid targeting keys:
pbjs.bidderSettings.standard = {
  adserverTargeting: [{
    key: 'hb_bidder',
    val: function val(bidResponse) {
      return bidResponse.bidderCode;
    }
  }, {
    key: 'hb_adid',
    val: function val(bidResponse) {
      return bidResponse.adId;
    }
  }, {
    key: 'hb_pb',
    val: function val(bidResponse) {
      var cpm = bidResponse.cpm;
      if (cpm > 20.00) {
        return 20.00;
      }
      return (Math.floor(cpm * 100) / 100).toFixed(2);
    }
  }]
};
Add this after pbjs.addAdUnits(ad_units).
You'll want to use a custom price granularity setting instead of high to get what you want.
See an example here: http://prebid.org/dev-docs/publisher-api-reference.html#customCPMObject
In that JSON object you'll want to redefine the same granularity settings that high uses, and then build on them to add granularity above $20.
Pro tips:
if you do not define values between $0 and $x.xx, that acts as a floor if you never want bids lower than $x.xx going to DFP
DFP caps the number of line items in an order at 450, so you may need one or more additional orders for your new granularity
always overlap the settings, e.g. if bucket 1 = $0-$3.00, bucket 2 should start at $3.00 (if you don't, a $3.009 bid won't be passed in to DFP)
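Why the overlap matters can be seen with a toy bucket lookup (a Python sketch, simplified from how Prebid actually maps CPMs to buckets):

```python
def bucket_index(cpm, buckets):
    """Return the index of the first (low, high) bucket containing cpm, else None."""
    for i, (low, high) in enumerate(buckets):
        if low <= cpm <= high:
            return i
    return None  # bid falls through and never reaches DFP

gapped = [(0.00, 3.00), (3.01, 20.00)]   # bucket 2 starts above bucket 1's end
overlap = [(0.00, 3.00), (3.00, 20.00)]  # bucket 2 starts at bucket 1's end

print(bucket_index(3.009, gapped))   # None: the $3.009 bid is dropped
print(bucket_index(3.009, overlap))  # 1: with overlap it lands in bucket 2
```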

Google custom search how to set number of returned results

I am using the Google Custom Search API for searching a site and trying to return the maximum possible results by setting the num param to some high number like 999, but this returns an error:
(400) Invalid Value
But when I set the num value to 10 or lower it works perfectly, so it seems Google puts a limit on returned results.
Here is my Google CSE link; you can check by setting the num param.
Google CSE API docs are here : API Docs
Any idea guys?
You can retrieve at most 10 pages with at most 10 results each.
In the query you can use the 'num' and 'start' params to control your request:
num = 1-10: how many results to show
start = 1-100: the starting point
So, if you need the maximum number of results, you must make 10 requests with num = 10 (the default) and start = 1, 11, 21, ..., 91.
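That paging scheme can be sketched as follows (request parameters only; building and sending the actual HTTP calls is left out):

```python
def page_params(total_wanted=100, page_size=10):
    """Parameter sets for successive CSE requests: start = 1, 11, 21, ..., 91."""
    return [{"num": page_size, "start": s}
            for s in range(1, total_wanted + 1, page_size)]

params = page_params()
print(len(params))                      # 10 requests
print([p["start"] for p in params])     # [1, 11, 21, 31, 41, 51, 61, 71, 81, 91]
```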
"queries": {
"nextPage": [
{
"title": "Google Custom Search - WTB rolex",
"totalResults": "3030",
"searchTerms": "WTB rolex",
"count": 10,
"startIndex": 11,
I think they want you to page through the result set. So, in this query, there are 3030 results and we're on page 1.
You can use the following parameters to specify which page you want:
"start": integer
It works, although I'm getting 400s from it too (anything with a start value over 100 returns a 400 for me).