Removing "TooLongFrameException" restrictions (http) - selenium

I am using selenium with browsermob-proxy, ultimately powered by "netty-all", to access a site (outside my control) which offers up enormous headers as part of its authentication process. Proxy fails with a netty error:
io.netty.handler.codec.TooLongFrameException: HTTP header is larger than 16384 bytes., version: HTTP/1.1
I need to remove all such limits from the netty-all jar that my browsermob-proxy depends on; scalability, performance, and memory conservation are not relevant in this use case.
Having cloned the repo, I changed:
DEFAULT_MAX_FRAME_SIZE in WebSocket00FrameDecoder (io.netty.handler.codec.http.websocketx)
HttpObjectDecoder default constructor in io.netty.handler.codec.http
to Integer.MAX_VALUE where appropriate.
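For HttpObjectDecoder, the change was along these lines (a sketch; the exact defaults in the no-arg constructor vary between netty versions):

protected HttpObjectDecoder() {
    // originally something like this(4096, 8192, 8192, true) --
    // maxInitialLineLength, maxHeaderSize, maxChunkSize, chunkedSupported
    this(Integer.MAX_VALUE, Integer.MAX_VALUE, Integer.MAX_VALUE, true);
}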
However, even with these new settings it keeps throwing "HTTP header is larger than 16384 bytes" in use.
Where else could this 16384 limit be coming from?
How does one remove it while retaining full functionality (at an acceptable cost to efficiency, memory usage, etc.)?

Arrived at a solution. It's far from elegant, but it works - my use case tolerates inefficiency and faults, so use with care.
I won't pollute this answer with Maven shenanigans, as they are not strictly relevant. However, please note that netty-all by default pulls all of its components from the Maven repo. To change netty-all internals you will need to produce a jar of the required component (handler.codec.http in this case), then change pom.xml to pull in your modified jar. There are several ways to do this; the only one that worked for me was using mvn install to place the jar in the local .m2 repo:
mvn install:install-file -Dfile=netty-codec-http-4.1.25.Final-SNAPSHOT.jar -DgroupId=io.netty -DartifactId=netty-codec-http -Dversion=4.1.25.Final-SNAPSHOT -Dpackaging=jar
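The matching dependency entry in the pom would then look roughly like this (a sketch; the version must be whatever you installed):

<dependency>
    <groupId>io.netty</groupId>
    <artifactId>netty-codec-http</artifactId>
    <version>4.1.25.Final-SNAPSHOT</version>
</dependency>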
Then build netty-all to get the final jar, which you then use in your own project instead of the original.
Files modified to remove size limits from http operation:
all/pom.xml
codec-http/pom.xml
codec-http/src/main/java/io/netty/handler/codec/http/HttpObjectDecoder.java
codec-http/src/main/java/io/netty/handler/codec/http/websocketx/WebSocket00FrameDecoder.java
codec-http/src/test/java/io/netty/handler/codec/http/HttpRequestDecoderTest.java
codec-http/src/test/java/io/netty/handler/codec/http/HttpResponseDecoderTest.java
Aside from setting various size restrictions to Integer.MAX_VALUE, I commented out the relevant tests to ensure that the Maven "package" command succeeds in producing the jar.
The git diff of the changes is available here:
https://gist.github.com/granite-zero/723fa55ae628494ff9b833dde1973a00
You could apply it as a patch to netty commit 04fac00c8c98ed26c5a75887c8e7e53b1e1b68d0

Related

setting NODE_EXTRA_CA_CERTS with dotenv does not work as an export

I feel puzzled by the following behavior. At the very beginning of my main index.js, I am using
require('dotenv').config();
console.log(process.env); // everything seems in order
I know that the rest of my code successfully accesses all the relevant process.env.${VARS}. However, I get SSL exceptions; exceptions that I can easily solve by
export NODE_EXTRA_CA_CERTS=/some/absolute/path/to/ca.pem
npm start
Is there something special about NODE_EXTRA_CA_CERTS that would explain why this specific variable set with require('dotenv').config() does not work while the others work like a charm?
Does it need to be set before running npm? If it does, why is that the case, and is there any workaround so I can keep things simple?
environment:
dotenv 16.0.0
node v16.13.2
Near-duplicate: How to properly configure node.js to use Self Signed root certificates?
Your problem is not in npm. npm start runs your application, typically (but not necessarily) by running node (or whatever spelling on your platform) to run your js code. When you use node to run js, NODE_EXTRA_CA_CERTS is read and saved in the C-code part of node at startup, before beginning to execute js, and subsequent changes in js variables like process.env do not affect it.
The clean way to do this in js is to pass the desired CA list -- which can consist of the standard list (from tls.rootCertificates) plus any additions (or replacements or deletions) you choose -- in the (relevant) TLS socket creation, or any https request that implicitly creates a TLS socket; or alternatively to use --use-openssl-ca and select an OpenSSL-format store provided by your system (modified if necessary by system means like update-ca-certificates on Debian/Ubuntu) or one you create.
Or when using npm as you do, it should be possible to configure your package.json to set the envvar before running the application in node.
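For example, something like this in package.json (a sketch; the certificate path is hypothetical, and the inline-variable syntax assumes a Unix shell - on Windows you would need a helper like cross-env):

{
  "scripts": {
    "start": "NODE_EXTRA_CA_CERTS=./certs/ca.pem node index.js"
  }
}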
If you can't do either/any of those, especially where you control the toplevel (and startup) but call libraries you can't [safely] change, see the Q I linked above. For https connections that use the default https.globalAgent you can (documentedly) set that per the A. For all connections, you can monkeypatch tls.createSecureContext to use the undocumented context.addCACert as in the Q, which OP confirmed in the A does actually work if using a correct cert.

Adding SSL parameters to pepper_box config in jmeter

I'm trying to test a Kafka stream in JMeter using the Pepper-Box config, but each time I try adding Java request parameters it goes back to the default parameters without saving the ones I have added. I have tried the recommendations on here of adding the underscore (so _ssl.enabled), but the params are still disappearing. Any recommendations? Using JMeter 5.3 and pepper-box 1.0.
I believe you need to put your SSL properties into the PepperBoxKafkaSampler directly; there are pre-populated placeholders which you can change, and the changes persist.
The same behaviour applies to Java Request Defaults.
It might be the case that your installation got corrupted somehow, or there is a conflict with another JMeter plugin; check the jmeter.log file for any suspicious entries.
In the meantime you may find the Apache Kafka - How to Load Test with JMeter article useful.
I had the same issue. I got around it by cloning the pepper-box repository (https://github.com/GSLabDev/pepper-box) and making changes to the PepperBoxKafkaSampler.java file, updating the setupTest() method with my props. You can also add the parameters making use of the .addArgument() method (already used in PepperBoxKafkaSampler.java) to make them available in JMeter; see the sketch below.
Rebuild the repo using Maven (mvn clean install) and replace the old pepper-box jar in jmeter/lib/ext with your newly built jar.
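A minimal sketch of the kind of change involved, assuming the upstream sampler's getDefaultParameters()/setupTest() structure (the property names are standard Kafka SSL settings; adjust to the actual code in the repo):

// In PepperBoxKafkaSampler.java -- expose the SSL settings as JMeter parameters
@Override
public Arguments getDefaultParameters() {
    Arguments defaultParameters = new Arguments();
    // ... existing arguments kept as-is ...
    defaultParameters.addArgument("security.protocol", "SSL");
    defaultParameters.addArgument("ssl.truststore.location", "/path/to/truststore.jks");
    defaultParameters.addArgument("ssl.truststore.password", "changeit");
    return defaultParameters;
}

// In setupTest() -- copy the parameters into the producer Properties
props.put("security.protocol", context.getParameter("security.protocol"));
props.put("ssl.truststore.location", context.getParameter("ssl.truststore.location"));
props.put("ssl.truststore.password", context.getParameter("ssl.truststore.password"));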

Ignore packagist.org on composer install | update

I'm using composer internally for managing internal software dependencies. Our repository server is on our private network and we aren't using any other package from any other repository than ours.
Every time you run
composer.phar [install | update]
It checks the packagist.org repositories after checking our own. Besides being unnecessary, this takes longer when packagist is slow (or even down) or our internet connection is having a bad day.
Is there any way to tell composer to ignore checking for packagist repositories?
Yes, and it is even documented on https://getcomposer.org/doc/05-repositories.md#disabling-packagist-org
You may try to use this command:
$ composer config repositories.packagist false
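Or put the equivalent directly into composer.json (this form is what the documentation linked above describes; the internal repository URL below is a placeholder):

{
    "repositories": [
        {
            "type": "composer",
            "url": "https://composer.internal.example.org"
        },
        {
            "packagist.org": false
        }
    ]
}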
You probably want to have a look at Satis: http://getcomposer.org/doc/articles/handling-private-packages-with-satis.md
It will make your life easier if you deal with more local/private packages, because otherwise you'd have to mention EVERY repository that might host required code. And you can use Satis to grab a copy of the versions into a ZIP file that can be hosted locally as well. See http://www.naderman.de/slippy/src/?file=2012-11-22-You-Thought-Composer-Couldnt-Do-That.html#13 for some hints on how to do it (press the left/right cursor keys to skip through the presentation).
For extra bonus points, you'd add packagist.org as a Composer repository to Satis, require some needed packages, and set { "require-dependencies": true } to grab their dependencies as well. In your own code, you'd only set your Satis repository and disable Packagist.

Maven 3 warnings: Failure to transfer asm:asm/maven-metadata.xml

While building the Giraph jar with dependencies, we are getting the following warnings and are really not sure how to resolve them. We already tried setting
useProjectArtifact to false
and
unpack to true
but neither seems to work. Any suggestions on how to resolve these?
[WARNING] Failure to transfer asm:asm/maven-metadata.xml from file:../../local.repository/trunk was cached in the local repository, resolution will not be reattempted until the update interval of local.repository has elapsed or updates are forced. Original error: Could not transfer metadata asm:asm/maven-metadata.xml from/to local.repository (file:../../local.repository/trunk): No connector available to access repository local.repository (file:../../local.repository/trunk) of type legacy using the available factories WagonRepositoryConnectorFactory
This looks like a connection problem (proxy or firewall), so you can try the following workarounds:
Explicitly refer to the ASM dependency. Look up the correct version and add it to your pom (http://mvnrepository.com/artifact/asm/asm), as in the sketch below. After that, execute mvn install to ensure that everything is OK.
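Something along these lines, assuming the 3.3.1 release is the one your build needs (check the link above for the right version):

<dependency>
    <groupId>asm</groupId>
    <artifactId>asm</artifactId>
    <version>3.3.1</version>
</dependency>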
If that doesn't work, you can try to manually download the dependency and copy it into your local repository (the local ".m2" folder), probably under ".m2/repository/asm/asm/". It isn't the best solution, but perhaps it can solve your problem.
Hope it helps!

Weblogic forces recompile of EJBs when migrating from 9.2.1 to 9.2.3

I have a few EJBs compiled with WebLogic's EJBC, compliant with WebLogic 9.2.1.
Our customer uses Weblogic 9.2.3.
During server start Weblogic gives the following message:
<BEA-010087> <The EJB deployment named: YYY.jar is being recompiled within the WebLogic Server. Please consult the server logs if there are any errors. It is also possible to run weblogic.appc as a stand-alone tool to generate the required classes. The generated source files will be placed in .....>
Consequently, server start takes 1.5 hours instead of 20 min. The next server start takes exactly the same time, meaning Weblogic does not cache the products of the recompilation. Needless to say, we cannot recompile all our EJBs to 9.2.3 just for this specific customer, so we need an on-site solution.
My questions are:
1. Is there any way of telling Weblogic to leave those EJB jars as they are and avoid the re-compilation during server start?
2. Can I tell Weblogic to cache the recompiled EJBs to avoid prolonged restarts?
Our current workaround was to write a script that does this recompilation manually before the EAR's creation and deployment (by simply running java weblogic.appc <jar-name>), but we would rather avoid this solution being used in production.
I FIXED this problem by spending a great deal of time researching and decompiling some classes. I encountered this when migrating from WebLogic 8 to 10.
By this time you might have understood the pain of dealing with Oracle WebLogic tech support; unfortunately, they did not have a server configuration setting to disable this.
You need to do 2 things.
Step 1. If you open the EJB jar files, you can see entries like:
ejb-jar.xml=3435671213
com.mycompany.myejbs.ejb.DummyEJBService=2691629828
weblogic-ejb-jar.xml=3309609440
WLS_RELEASE_BUILD_VERSION_24=10.0.0.0
You see these hashcodes for each of your EJB names. Make these hashcodes zero,
then pack the jar file and deploy it on the server:
com.mycompany.myejbs.ejb.DummyEJBService=0
weblogic-ejb-jar.xml=0
This is just a marker file that weblogic.appc keeps in each EJB jar to trigger the recompilation during server boot-up. I automated this process of making these hashcodes zero.
The hashcodes remain the same for each EJB even if you execute appc more than once; if you add a new EJB class or delete a class, the corresponding entries are updated in this marker file.
Note 1:
How do you get this file?
If you open domains/yourdomain/servers/yourServerName/cache/EJBCompilerCache/XXXXXXXXX,
you will see this file for each EJB. WebLogic sets the hashcodes to zero after it recompiles.
Note 2:
When you generate EJBs using appc, generate them to an exploded directory using -output C:\myejb
instead of C:\myejb.jar. This way you can play around with the marker file.
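For what it's worth, the zeroing can be scripted. A minimal sketch, assuming the marker file is the WL_GENERATED entry named later in this thread and sits at the jar root (the jar name is hypothetical):

# extract the marker file, zero every numeric hashcode, put it back
jar xf myejb.jar WL_GENERATED
sed 's/=[0-9][0-9]*$/=0/' WL_GENERATED > WL_GENERATED.tmp
mv WL_GENERATED.tmp WL_GENERATED
jar uf myejb.jar WL_GENERATED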
Step 2. You also need a PATCH from WebLogic. When you install the patch, you see a message like:
"PATCH CRXXXXXX installed successfully. Eliminate EJB recompilation for appc".
I don't remember the patch number, but you can request it from WebLogic.
You need to use both steps to fix the problem; the patch fixes only part of it.
Good luck!!
cheers,
raj
The marker file in the EJB jars is WL_GENERATED.
Just to update on the solution we went with - eventually we opted to recompile the EJBs once at the customer's site instead of messing with the EJBs' internal markers (we don't want Oracle saying they cannot support problems derived from this scenario).
We created two KSH scripts - the first iterates over all the EJB jars, copies them to a temp dir and then re-compiles them in parallel by running several instances of the second script, which does only one thing: java -Drecompiler=yes -cp $CLASSPATH weblogic.appc $1 (with error handling, of course :))
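A sketch of what the pair of scripts boils down to (file names and directories here are hypothetical, and the real versions had error handling):

#!/bin/ksh
# recompile-one.ksh -- the second script: recompile a single jar in place
java -Drecompiler=yes -cp "$CLASSPATH" weblogic.appc "$1" || print -u2 "appc failed for $1"

#!/bin/ksh
# recompile-all.ksh -- the first script: copy each EJB jar to a temp dir
# and recompile them in parallel
for jar in /path/to/ejbs/*.jar; do
    cp "$jar" /tmp/ejb-recompile/
    ./recompile-one.ksh "/tmp/ejb-recompile/${jar##*/}" &
done
wait    # let all parallel appc runs finish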
This solution reduced compilation time from 70 min to 15 min. After this we re-create the EAR file and redeploy it with the new EJBs. We do this once per several UAT environment creations, so we save quite a lot of time (55 min × number of envs per drop × number of drops).