Maven 3 warnings: Failure to transfer asm:asm/maven-metadata.xml

While building the Giraph jar with dependencies, we are getting the following warning (repeated many times) and are really not sure how to resolve it. We already tried setting
useProjectArtifact to false
and
unpack to true
but neither seems to work.
Any suggestions on how to resolve this?
[WARNING] Failure to transfer asm:asm/maven-metadata.xml from file:../../local.repository/trunk was cached in the local repository, resolution will not be reattempted until the update interval of local.repository has elapsed or updates are forced. Original error: Could not transfer metadata asm:asm/maven-metadata.xml from/to local.repository (file:../../local.repository/trunk): No connector available to access repository local.repository (file:../../local.repository/trunk) of type legacy using the available factories WagonRepositoryConnectorFactory

This looks like a connectivity problem (proxy or firewall), so you can work around it in a couple of ways:
Refer to the ASM dependency explicitly. Look up the correct version (http://mvnrepository.com/artifact/asm/asm) and add it to your pom. After that, run mvn install to make sure everything is OK.
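A minimal sketch of the dependency entry; the version 3.3.1 here is only an example, check which version your build actually needs:
<dependency>
  <groupId>asm</groupId>
  <artifactId>asm</artifactId>
  <version>3.3.1</version>
</dependency>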
If that doesn't work, you can try to manually download the dependency and copy it into your local repository (the ".m2" folder), which would place it under "~/.m2/repository/asm/asm/<version>/". It isn't the best solution, but perhaps it can solve your problem.
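Rather than copying files by hand, you can let Maven lay out the directory structure for you with install:install-file (a sketch; the jar filename and version are illustrative):
mvn install:install-file -Dfile=asm-3.3.1.jar -DgroupId=asm -DartifactId=asm -Dversion=3.3.1 -Dpackaging=jar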
Hope it helps!

Related

Unable to publish Azure Data Factory pipeline

I am trying to publish an Azure Data Factory pipeline; however, I'm getting the error:
Error The document creation or update failed because of invalid
reference 'master'. Please ensure 'master' exists in data factory mode
and recreate it in Git mode if already present.
I am familiar with the error. However, I can't find the reference 'master'. Can someone let me know how to go about tracking it down?
Thanks
This issue is commonly caused by a mismatch between Data Factory mode and Git mode. It may happen when Git is first configured, or when changes are added directly in Git or in Live mode.
If you are unable to find and fix the conflict manually, you may re-sync the content in the Git Configuration page, by using either:
Overwrite Live Mode (which I recommend): makes the Data Factory mode (published) version match Git.
Import Resources: makes a Git branch match Data Factory mode.
(Screenshot: the Git configuration page)
Please be advised that overwriting live mode may result in losing changes not currently in Git. You may use Import Resources to persist changes prior to this.

dbt deps command results in "Unable to connect to registry hub"

When running dbt deps, I get back this error message:
Running with dbt=0.17.0
Error sending message, disabling tracking
Encountered an error:
Unable to connect to registry hub
What's happening here, and how can I work around it?
First of all, it's worth understanding what's going on here. It looks like you're trying to install a package from the dbt hub site (hub.getdbt.com) — if you open up your packages.yml file, you'll find something like this:
packages:
  - hub: package-owner/package-name
    version: 0.1.0
When you run dbt deps (at a high level):
dbt sends a request to hub.getdbt.com.
From hub.getdbt.com, a request is sent to GitHub to download the package.
The package is copied into your project.
This error occurs when dbt cannot connect to the hub site after retrying the network request several times. First off, we recommend you retry the dbt deps command: sometimes it's just a blip in connectivity that goes away on the second try.
If the error persists, there may be a few different reasons for it:
hub.getdbt.com might be unavailable. This happens but is relatively rare. You can navigate to hub.getdbt.com to check if this is the case. Also check the Netlify status page to see if there are any issues.
GitHub might be down — you can check this by going to the GitHub status page.
Finally, it may be that a firewall rule or antivirus software on your computer is rejecting the request. Talk to your IT team to find out if this is the case and whether that restriction can be removed.
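To quickly test whether your machine can reach the hub at all, a simple request helps (curl is just an example; any HTTP client will do):
curl -sSI https://hub.getdbt.com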
We generally recommend using the hub syntax for packages; however, if you need to work around it, you can consider using the git syntax (docs) or installing the package from a local directory (docs), as sketched below.
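A rough sketch of those two alternatives in packages.yml (the repository URL and the local path are placeholders):
packages:
  - git: "https://github.com/package-owner/package-name.git"
    revision: 0.1.0
  - local: ../path/to/package-name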

Removing "TooLongFrameException" restrictions (http)

I am using selenium with browsermob-proxy, ultimately powered by "netty-all", to access a site (outside my control) which offers up enormous headers as part of its authentication process. Proxy fails with a netty error:
io.netty.handler.codec.TooLongFrameException: HTTP header is larger than 16384 bytes., version: HTTP/1.1
I need to remove all such limits from the netty-all jar that my browsermob-proxy depends on; scalability, performance, and memory conservation are not relevant in this use case.
Having cloned the repo, I changed:
DEFAULT_MAX_FRAME_SIZE in WebSocket00FrameDecoder (io.netty.handler.codec.http.websocketx)
HttpObjectDecoder default constructor in io.netty.handler.codec.http
to Integer.MAX_VALUE where appropriate.
However, even with these new settings it keeps throwing "HTTP header is larger than 16384 bytes" in use.
Where else could this 16384 limit be coming from?
How does one remove it while retaining full functionality (at an acceptable cost to efficiency, memory usage, etc.)?
Arrived at a solution. It's far from elegant, but it works; my use case tolerates inefficiency and faults, so use with care.
I won't pollute this answer with Maven shenanigans, as they are not strictly relevant; however, please note that netty-all by default pulls all of its components from the Maven repo. To change netty-all internals you will need to produce a jar of the required component (handler.codec.http in this case), then change pom.xml to pull in your modified jar. There are several ways to do this; the only one that worked for me was using mvn install to place the jar in the local .m2 repo:
mvn install:install-file -Dfile=netty-codec-http-4.1.25.Final-SNAPSHOT.jar -DgroupId=io.netty -DartifactId=netty-codec-http -Dversion=4.1.25.Final-SNAPSHOT -Dpackaging=jar
Then build netty-all to get the final jar, which you then use in your own project instead of the original.
Files modified to remove size limits from http operation:
all/pom.xml
codec-http/pom.xml
codec-http/src/main/java/io/netty/handler/codec/http/HttpObjectDecoder.java
codec-http/src/main/java/io/netty/handler/codec/http/websocketx/WebSocket00FrameDecoder.java
codec-http/src/test/java/io/netty/handler/codec/http/HttpRequestDecoderTest.java
codec-http/src/test/java/io/netty/handler/codec/http/HttpResponseDecoderTest.java
Aside from setting various size restrictions to Integer.MAX_VALUE, I commented out the relevant tests to ensure that the Maven "package" command succeeds in producing the jar.
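For illustration, the constructor change in HttpObjectDecoder amounts to something like this (an excerpt-style sketch; the stock default values differ between netty versions):
// io.netty.handler.codec.http.HttpObjectDecoder (modified)
protected HttpObjectDecoder() {
    // The stock constructor delegates with fixed limits, roughly
    // this(4096, 8192, 8192, true); raise them all to the maximum.
    this(Integer.MAX_VALUE, Integer.MAX_VALUE, Integer.MAX_VALUE, true);
}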
The git diff of the changes is available here:
https://gist.github.com/granite-zero/723fa55ae628494ff9b833dde1973a00
You could apply it as a patch against netty commit 04fac00c8c98ed26c5a75887c8e7e53b1e1b68d0.
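Applying it would look roughly like this (the patch filename is whatever you saved the gist diff as):
cd netty
git checkout 04fac00c8c98ed26c5a75887c8e7e53b1e1b68d0
git apply remove-size-limits.patch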

Idea, sbt, unable to reparse warning

I've pushed my artifact to the OSS Nexus repo and added it as a dependency to another project. IDEA keeps warning me:
[warn] Unable to reparse com.github.kondaurovdev#jsonapi_2.11;0.1-SNAPSHOT from sonatype-snapshots, using Fri May 13 17:12:52 MSK 2016 [warn] Choosing sonatype-snapshots for com.github.kondaurovdev#jsonapi_2.11;0.1-SNAPSHOT
Maybe I pushed the artifact in a wrong way somehow? But I did it earlier and everything was OK. How do I get rid of these warnings? Or should I just ignore them?
I had the same issue.
Did you publish your SNAPSHOT version to your artifactory? If so, this might be your problem.
As you know, when publishing locally, your snapshot version is stored in the .ivy2/local directory. The remote versions are stored in the .ivy2/cache directory.
When looking into the .ivy2/cache/{dependency} folder, you will see that it has only downloaded the xml and properties files, so just the metadata and no jars. This is the actual reason why it can't be parsed: it's simply not there.
Since .ivy2/cache takes precedence over .ivy2/local, it won't see your locally published version. There are two ways to fix this.
Update your snapshot version number (recommended).
Remove the SNAPSHOT from your artifactory and remove the .ivy2/cache/{dependency} folder on every client that has a local version (see the sketch below).
In my opinion the first one is the way to go.
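For the second option, clearing the cached entry looks roughly like this (the path assumes the artifact from the question and a default ivy home):
rm -rf ~/.ivy2/cache/com.github.kondaurovdev/jsonapi_2.11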
I had the same issue, and it went away after I added the following to my build.sbt:
updateOptions := updateOptions.value.withLatestSnapshots(false)
You can find more detail at https://github.com/sbt/sbt/issues/2650.

Maven deploy fails for Apache Archiva

I have a Maven project which generates a 413.06 KB jar file. I have to deploy it to an Apache Archiva based managed repository. I have tried to deploy different versions, and it created the required layout and structure, uploaded some files, and even uploaded that jar at around 200 KB. The uploaded size changes every time, but it always fails to upload the full 413.06 KB jar file.
Information:
I am running standalone Archiva.
I have given the guest account the Global Repository Manager & "Repository Manager - MYREPO" roles.
I have also tried a separate account in Archiva with "Repository Manager - MYREPO" rights, and configured it in Maven's settings.xml file with a custom timeout (sketched below the error).
I am getting the following error:
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-deploy-plugin:2.7:deploy
(default-deploy) on project SharedshelfRepository: Error deploying artifact: Transfer error:
The server did not respond within the configured timeout. -> [Help 1]
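For reference, the custom timeout was configured along these lines in settings.xml (a sketch; the server id, credentials, and values are illustrative, and httpConfiguration applies to Maven 3's default HTTP wagon):
<server>
  <id>archiva.myrepo</id>
  <username>deployer</username>
  <password>*****</password>
  <configuration>
    <httpConfiguration>
      <all>
        <connectionTimeout>120000</connectionTimeout>
        <readTimeout>120000</readTimeout>
      </all>
    </httpConfiguration>
  </configuration>
</server>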
That might be a maven-deploy-plugin issue; the resources plugin itself needs several dependencies. Try manually jar and p
What version of Maven are you using? You might try 3.0.4, as it has a different HTTP library. I'm also not sure if there's more context for what was happening when it timed out (it seems more request oriented than deploy oriented, and deploy does request some metadata).
I can't see that you'd need to alter the timeout, as none of the defaults should apply to such a small file. How long does it take to fail?