I'm prototyping a simple "output" filter with Apache + mod_lua. How can I read the response body in Lua, after all the other native output filters have been applied? For example, can I get the actual response that will be sent to the client?
The manual has some good guidance on this:
http://httpd.apache.org/docs/current/mod/mod_lua.html#modifying_buckets
Modifying contents with Lua filters

Filter functions implemented via LuaInputFilter or LuaOutputFilter are designed as three-stage non-blocking functions using coroutines to suspend and resume a function as buckets are sent down the filter chain. The core structure of such a function is:
function filter(r)
    -- Our first yield is to signal that we are ready to receive buckets.
    -- Before this yield, we can set up our environment, check for conditions,
    -- and, if we deem it necessary, decline filtering a request altogether:
    if something_bad then
        return -- This would skip this filter.
    end
    -- Regardless of whether we have data to prepend, a yield MUST be called here.
    -- Note that only output filters can prepend data. Input filters must use the
    -- final stage to append data to the content.
    coroutine.yield([optional header to be prepended to the content])
    -- After we have yielded, buckets will be sent to us, one by one, and we can
    -- do whatever we want with them and then pass on the result.
    -- Buckets are stored in the global variable 'bucket', so we create a loop
    -- that checks if 'bucket' is not nil:
    while bucket ~= nil do
        local output = mangle(bucket) -- Do some stuff to the content
        coroutine.yield(output) -- Return our new content to the filter chain
    end
    -- Once the buckets are gone, 'bucket' is set to nil, which will exit the
    -- loop and land us here. Anything extra we want to append to the content
    -- can be done by doing a final yield here. Both input and output filters
    -- can append data to the content in this phase.
    coroutine.yield([optional footer to be appended to the content])
end
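Applied to your case, a minimal sketch of an output filter that just inspects the body as it passes through could look like the following (the filter name, script path, and log call are illustrative). Note that a Lua filter sees the output at the point where it is inserted into the filter chain, so whether it runs after the other native filters depends on where you register it:

-- bodydump.lua: log each chunk of the outgoing response, then pass it on unchanged
function dump_body(r)
    coroutine.yield() -- signal readiness; nothing to prepend

    while bucket ~= nil do
        r:debug("response chunk: " .. bucket) -- inspect the body that will reach the client
        coroutine.yield(bucket) -- forward the bucket unmodified
    end

    coroutine.yield() -- nothing to append either
end

It would be hooked up in the server configuration roughly like so (the paths are assumptions):

LuaOutputFilter bodydump /usr/local/apache2/lua/bodydump.lua dump_body
SetOutputFilter bodydump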
I have got a requirement saying: blob storage has multiple files with the names file_1.csv, file_2.csv, file_3.csv, file_4.csv, file_5.csv, file_6.csv, file_7.csv. From these I have to read only the files numbered 5 to 7.
How can we achieve this in an ADF/Synapse pipeline?
I have reproduced this in my lab; please see the repro steps below.
ADF:
Using the Get Metadata activity, get a list of all files.
(Parameterize the source file name in the source dataset to pass ‘*’ in the dataset parameters to get all files.)
Pass the Get Metadata output child items to a ForEach activity:
@activity('Get Metadata1').output.childItems
Add an If Condition activity inside the ForEach and add the true-case expression so that only the required files are copied to the sink:
@and(greater(int(substring(item().name,5,1)),4),lessOrEquals(int(substring(item().name,5,1)),7))
When the If Condition is true, add a Copy Data activity to copy the current item (file) to the sink.
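As a quick check of that expression with file_6.csv as the current item: substring(item().name, 5, 1) returns "6" (the character at zero-based index 5 of the name), int() turns it into 6, and and(greater(6, 4), lessOrEquals(6, 7)) evaluates to true, so the file is copied. For file_2.csv the greater() test fails and the file is skipped.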
I took a slightly different approach, using a Filter activity and the endsWith function:
The filter expression is:
@or(or(endsWith(item().name, '_5.csv'),endsWith(item().name, '_6.csv')),endsWith(item().name, '_7.csv'))
Slightly different approaches, similar results; it depends on what you need.
You can always do what @NiharikaMoola-MT suggested. But since you already know the range of the files (5-7), I suggest:
Declare two parameters as the upper and lower limits of the range.
Create a ForEach loop and pass the parameters to create a range [lowerLimit, upperLimit].
Create a parameterized dataset for the source.
Use the file number from the ForEach loop to create a dynamic file name expression like:
@concat('file_', item(), '.csv')
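For the ForEach items themselves, ADF's range(startIndex, count) function can build the list of file numbers from those two parameters. A sketch, assuming the parameters are named lowerLimit and upperLimit:

@range(pipeline().parameters.lowerLimit, add(sub(pipeline().parameters.upperLimit, pipeline().parameters.lowerLimit), 1))

With lowerLimit = 5 and upperLimit = 7 this yields [5, 6, 7], and each item() then feeds the @concat expression above.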
I have a JMeter plan that starts with a single JDBC sampler query that captures the session ID from the Teradata database (SELECT SESSION;). The same plan also has a large number of JDBC samplers with complicated queries producing large output that I don't want to include in the report.
If I configure a Summary Report and tick Save Response Data (XML), then the output from all sampler queries will be saved.
How do I add only the first query result (it's a single integer) to the test summary report and ignore the results from all other queries? For example, is there a way to set responseData = false after the first query output is captured?
Maybe the sample_variables property can help?
Define something in the "Variable Names" section of the JDBC Request, i.e. put the session reference name there, like:
session
Add the next line to the user.properties file (it lives in JMeter's "bin" folder):
sample_variables=session_1
or alternatively pass it via the -J command-line argument, like:
jmeter -Jsample_variables=session_1 -n -t /path/to/testplan.jmx -l /path/to/results.csv
You need to use session_1, not session. As per the JDBC Request sampler documentation:
If the Variable Names list is provided, then for each row returned by a Select statement, the variables are set up with the value of the corresponding column (if a variable name is provided), and the count of rows is also set up. For example, if the Select statement returns 2 rows of 3 columns, and the variable list is A,,C, then the following variables will be set up:
A_#=2 (number of rows)
A_1=column 1, row 1
A_2=column 1, row 2
C_#=2 (number of rows)
C_1=column 3, row 1
C_2=column 3, row 2
So given that your query returns only 1 row containing 1 integer, it will live in the session_1 JMeter variable. See the Debugging JDBC Sampler Results in JMeter article for comprehensive information on working with database query results in JMeter.
When the test completes you'll see an extra column in the .jtl results file holding your "session" value.
Although it doesn't exactly solve your question as posted, I will suggest a workaround using the "scope" of a listener (i.e. a listener will only record items at the same or lower level than the listener itself). Specifically: have two Summary Reports, one at the level of the test, the other (together with the sampler whose response you want to record) under a controller. For example:
Here I have samplers 1, 2, 3, 4, and I only want to save response data from sampler 2. So:
Summary Report - Doesn't save responses is at the global level, and it's configured to not save any response data. It saves only what I want to save for all samplers.
Summary Report - Saves '2' only is configured to save response data in XML format. Because this instance of the Summary Report is under the same controller as sampler 2, while the other samplers (1, 3, 4) are at a higher level, it will only record responses of sampler 2.
So it doesn't exactly allow you to save response data from one sampler into the same file as all the other Summary Report data. But at least you can filter which responses you are saving.
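A sketch of the resulting test plan tree (sampler and controller names are illustrative):

Test Plan
  Summary Report - Doesn't save responses
  Sampler 1
  Simple Controller
    Sampler 2
    Summary Report - Saves '2' only
  Sampler 3
  Sampler 4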
Maybe you can try an assertion on ${__threadNum},
i.e. set the assertion condition to "${__threadNum}=1" and set your listener's "Log/Display Only" option to "Successes".
This way it should log only the responses from the first thread.
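A minimal sketch of that idea as a JSR223 Assertion in Groovy (the failure message is illustrative; note that ctx.getThreadNum() is zero-based while ${__threadNum} is one-based):

// Mark samples from every thread except the first as failed,
// so a listener set to "Successes" will not record them
if (ctx.getThreadNum() != 0) {
    AssertionResult.setFailure(true)
    AssertionResult.setFailureMessage('not the first thread; response not recorded')
}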
I think I have a general problem with understanding the structure of matches and the scope in which variables of the match live.
The specific piece of code where I have the problem with is this:
// S sentiment toward A goodFor/badFor T
// => S sentiment toward the idea of A goodFor/badFor T
MATCH (S:A)-[:SOURCE]->(sent1:PS {type:"sentiment"})-[:TARGET]->(gfbf:E {type:"gfbf"}) , (A)-[:SOURCE]->(gfbf)-[:TARGET]->(T) , (Writer:A {type:"writer"})
// if there is some negative belief in any of the writers private state spaces that involve gfbf then inference is blocked
WHERE NOT (Writer)-[*1..]->({type:"believesTrue" , spec:FALSE})-[*1..]->(gfbf)
// if sent1 is in some private state spaces of the writer return all of these
OPTIONAL MATCH p=(Writer)-[*]->(sent1)
WITH NODES(p)[1..-1] AS ps_nodes
WHERE ALL(x IN ps_nodes[1..] WHERE LABELS(x) = "PS")
MERGE (S)-[:SOURCE]->(sent2:PS {type:"sentiment" , spec:(sent1.spec)})-[:TARGET]->(ideaOf:I {name:"ideaOf" , type:"ideaOf"})-[:TARGET]->(gfbf)
ON CREATE SET sent2.name =
CASE sent2.spec
WHEN FALSE THEN "-S"
ELSE "+S"
END
RETURN p
I think it's not relevant to understand what this is for; it suffices to see the structure. But basically what it does is: it looks for a subgraph where there is a path S-->sent1-->gfbf and also a path A-->gfbf-->T. If it finds that, it makes a new path S-->sent2-->ideaOf-->gfbf, all the while setting the properties of the new nodes depending on the properties of the nodes from the match. Furthermore, it looks for whether there is also a path Writer-->...-->sent1 where all nodes in the ... part have the label PS. If it finds that path, then it returns it for further operations in a different part of the program.
The error I am getting is this:
py2neo.cypher.error.statement.InvalidSyntax: sent1 not defined (line 6, column 58 (offset: 421))
"MERGE (S)-[:SOURCE]->(sent2:PS {type:"sentiment" , spec:(sent1.spec)})-[:TARGET]->(ideaOf:I {name:"ideaOf" , type:"ideaOf"})-[:TARGET]->(g"bf)
Why is sent1 no longer defined where I use it and how would I need to restructure the code to make it valid?
sent1 isn't in the prior WITH - change it to:
WITH NODES(p)[1..-1] AS ps_nodes, sent1
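In Cypher, a WITH clause defines exactly which variables stay in scope for the rest of the query, so everything the later clauses refer to has to be carried along explicitly. Since the MERGE also uses S and gfbf, and the final RETURN uses p, a fuller version of that line would be (a sketch based on the query above):

WITH NODES(p)[1..-1] AS ps_nodes, S, sent1, gfbf, p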
I have a crawler that works just fine for collecting the urls I am interested in. However, before retrieving the content of these urls (i.e. the ones that satisfy rule no. 3), I would like to update them, i.e. add a suffix - say '/fullspecs' - on the right-hand side. That means that, in fact, I would like to retrieve and further process - through the callback function - only the updated ones. How can I do that?
rules = (
Rule(LinkExtractor(allow=('something1'))),
Rule(LinkExtractor(allow=('something2'))),
Rule(LinkExtractor(allow=('something3'), deny=('something4', 'something5')), callback='parse_archive'),
)
You can set the LinkExtractor's process_value parameter to lambda x: x + '/fullspecs', or to a function if you want to do something more complex. (Note that process_value is a parameter of the LinkExtractor, not of the Rule.)
You'd end up with:
Rule(LinkExtractor(allow=('something3'), deny=('something4', 'something5'),
                   process_value=lambda x: x + '/fullspecs'),
     callback='parse_archive')
See more at: http://doc.scrapy.org/en/latest/topics/link-extractors.html#basesgmllinkextractor
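Putting it together, a minimal sketch of the full spider (using current Scrapy import paths; the spider name, start URL, and 'something*' patterns are placeholders):

from scrapy.spiders import CrawlSpider, Rule
from scrapy.linkextractors import LinkExtractor

class ArchiveSpider(CrawlSpider):
    name = 'archive'
    start_urls = ['http://example.com/']

    rules = (
        Rule(LinkExtractor(allow=('something1',))),
        Rule(LinkExtractor(allow=('something2',))),
        Rule(LinkExtractor(allow=('something3',),
                           deny=('something4', 'something5'),
                           # rewrite every extracted URL before it is requested
                           process_value=lambda x: x + '/fullspecs'),
             callback='parse_archive'),
    )

    def parse_archive(self, response):
        # only the rewritten .../fullspecs pages arrive here
        self.logger.info('Visited %s', response.url)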
I need to count the rows in a table, so I use a Regular Expression Extractor. But the Response Assertion ends with an error: it tried to find exactly rows:${ROWS_matchnr}.
I tried Google, but only found a few non-functional recommendations.
Thread Group
Http Cookie Manager
Http Request
Regular Expression Extractor(ROWS, row-(.*), $1$, 0, )
Response Assertion(rows:${ROWS_matchnr})
Change the value you have in the Match No field from 0 to -1. As documented in the, ahem, useless official help:
If the match number is set to a negative number, then all the possible matches in the sampler data are processed. The variables are set as follows:
refName_matchNr - the number of matches found; could be 0
refName_n, where n = 1,2,3 etc - the strings as generated by the template
refName_n_gm, where m=0,1,2 - the groups for match n
refName - always set to the default value
refName_gn - not set
Then change ${ROWS_matchnr} to ${ROWS_matchNr} (capital N) and it should work.
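Putting both fixes together, in the same notation as the plan above, the two elements become:

Regular Expression Extractor(ROWS, row-(.*), $1$, -1, )
Response Assertion(rows:${ROWS_matchNr})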
If you still have issues then use a Debug Sampler to see what is getting returned from the regex.