I have an Artifactory repo that sits behind basic authentication. How would I configure the settings.xml to allow access?
<mirrors>
  <mirror>
    <id>artifactory</id>
    <mirrorOf>*</mirrorOf>
    <url>https://myserver.example.com/artifactory/repo</url>
    <name>Artifactory</name>
  </mirror>
</mirrors>
<servers>
  <!--
    This server configuration gives your personal username/password for
    Artifactory. Note that the server id must match that given in the
    mirrors section.
  -->
  <server>
    <id>Artifactory</id>
    <username>someArtifactoryUser</username>
    <password>someArtifactoryPassword</password>
  </server>
</servers>
So the server tag holds the credentials for the Artifactory user, but I also need to provide another username/password to get through the basic auth. Where would I put that?
The username and password go in the server settings as you have them. I think your problem is that you've specified the server by its name (Artifactory), rather than its id (artifactory).
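For reference, the matching pair would look something like this (same placeholder credentials, server id lowercased to match the mirror id):
<servers>
  <server>
    <id>artifactory</id>
    <username>someArtifactoryUser</username>
    <password>someArtifactoryPassword</password>
  </server>
</servers>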
I'd recommend you put the server settings in your user settings rather than the global settings. You can also encrypt the password in Maven 2.1.0+, see the mini guide for details.
Update: What version of Artifactory are you using? There is a discussion and a corresponding issue reporting that basic auth fails. This has apparently been fixed in 2.0.7 and 2.1.0.
From the discussion, it seems that a workaround is to pass the properties via the command line, e.g.
-Dhttp.proxyHost=proxy -Dhttp.proxyPort=8080 -Dproxy.username=... -Dhttp.password=...
Update: To let your Maven installation connect through a firewall, you'll need to configure the proxy section of the settings.xml; see this question for some pointers on doing that.
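As a rough sketch, the proxy section looks like this (host, port, and credentials below are placeholders for your own values):
<proxies>
  <proxy>
    <id>corporate-proxy</id>
    <active>true</active>
    <protocol>http</protocol>
    <host>proxy.example.com</host>
    <port>8080</port>
    <username>proxyUser</username>
    <password>proxyPassword</password>
    <nonProxyHosts>localhost|*.example.com</nonProxyHosts>
  </proxy>
</proxies>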
Update 2: There are additional properties you can set in the server settings; see this blog for some background. I've not had an opportunity to test this, but from the blog and the related HTTP wagon javadoc, it appears you can set authenticationInfo on the server settings, something like this:
<server>
  <id>Artifactory</id>
  <username>someArtifactoryUser</username>
  <password>someArtifactoryPassword</password>
  <configuration>
    <authenticationInfo>
      <userName>auth-user</userName>
      <password>auth-pass</password>
    </authenticationInfo>
  </configuration>
</server>
I was able to use the following configuration to enable HTTP basic authentication - by writing the necessary HTTP headers manually. In my situation I used it to access the build artifacts on my Go server as a poor man's staging repository.
<server>
  <id>go</id>
  <configuration>
    <httpHeaders>
      <property>
        <name>Authorization</name>
        <!-- Base64-encoded "guest:guest" -->
        <value>Basic Z3Vlc3Q6Z3Vlc3Q=</value>
      </property>
    </httpHeaders>
  </configuration>
</server>
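If you need this for a different account, the header value is just the Base64 encoding of username:password; for example, on a machine with the base64 utility installed:
printf 'guest:guest' | base64
# prints Z3Vlc3Q6Z3Vlc3Q=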
Tip to solve the problem with the clear-text password:
Access and log in to Artifactory.
Once you are logged in, click your user name in the upper right corner of the screen.
Enter your password and click the Unlock button, which enables the encrypted password.
Copy the tag shown at the bottom of the screen and paste it into the settings.xml file. If you prefer to copy just the password, make sure it is exactly equal to the value shown, including the "\" at the beginning of the password.
Remember to adjust the tag with the id of your server, as defined in the corresponding tag in your pom.xml.
Click the Update button and you're done. Check that everything goes well the next time the project is published.
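The resulting server entry would look roughly like this (the id and the encrypted value are placeholders; paste the exact string Artifactory shows you, including any leading "\"):
<server>
  <id>artifactory</id>
  <username>someArtifactoryUser</username>
  <!-- placeholder: replace with the encrypted password copied from your Artifactory profile page -->
  <password>\PASTE_ENCRYPTED_PASSWORD_HERE</password>
</server>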
Most of the time this happens for one of the following reasons:
settings.xml is not found in ~/.m2 (user settings) or ${maven.home}/conf (global settings)
settings.xml doesn't contain the correct credentials (username or API key) to access Artifactory
The Artifactory instance mentioned in settings.xml cannot be accessed with the given credentials
You can see what Maven looks for, and in which directories, by running with the -e -X options.
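For example, to see the full debug output and the settings Maven actually resolved (help:effective-settings is a standard maven-help-plugin goal):
mvn clean install -e -X
mvn help:effective-settings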
Sometimes the settings.xml file may be placed inside a different directory, so when you build locally, Maven cannot find it in ~/.m2. For example, if the project uses Buildkite, the file may be inside a directory that Buildkite can access during the build process. In that case, just copy the same settings.xml into ~/.m2.
To find the Artifactory credentials, log on to the URL using your email/password, go to your profile, and locate the username and the API key.
Then check that the repository configured in settings.xml is correct.
I set up a local Trac wiki using a conda env, where I installed all dependencies, except the system packages, which I installed in the system:
trac-admin . initenv
I entered the required info, like the project name, there.
Then I ran the Trac standalone server:
tracd --port 8000 .
inside the directory where I set up the wiki.
Since this is supposed to be a wiki, which I use locally myself and not for any multiuser setup, I don't need any authentication functionality. How can I deactivate any authentication or need for login for Trac?
I know that I don't have permissions, because I used the quick search field for a page which could not exist, and when the wiki showed no results, it didn't offer a create-page button of any kind. According to the StartPage, this means I don't have permissions.
I couldn't find any enable/disable setting for this in ./conf/trac.ini. It would also be acceptable to find an easy way to create a user as which I can log in to Trac, but all the guides in the Trac documentation assume prior knowledge of some kind of configuration files and don't explain them exactly. For example here. Where does that configuration file even go, and what kind of syntax does it use? Not really helpful.
You are correct that you need to modify the permissions (authorization). The permissions are stored in the database rather than in trac.ini. You need to grant permissions using the trac-admin utility. See TracPermissions.
trac-admin $env permission add anonymous WIKI_CREATE WIKI_MODIFY WIKI_DELETE WIKI_RENAME
For help, execute:
trac-admin $env permission help
If you wish to setup authentication, see TracStandalone: UsingAuthentication.
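If you later do want authentication with the standalone server, a rough sketch using tracd's --basic-auth option (the htpasswd path and project directory below are assumptions for illustration):
htpasswd -c /path/to/trac.htpasswd someuser
tracd --port 8000 --basic-auth="projectdirname,/path/to/trac.htpasswd,TracRealm" /path/to/projectdirname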
I am configuring SPNEGO/Tomcat/SSO on Windows 7.
I don't understand what a Realm is and where it is configured!
From reading the following guides:
https://tomcat.apache.org/tomcat-7.0-doc/realm-howto.html
https://dzone.com/articles/do-not-publish-configuring-tomcat-single-sign-on-w
I understand a realm is a DB of users/passwords. This DB data can be taken from several places: a database, Active Directory, a users.xml file, etc.
When configuring krb5.ini and jaas.config I need to provide a realm name. Where do I find this realm name? On our Active Directory machine, no system admin ever created a Realm object, so how do I know what realm name to enter in the configuration?
SPNEGO with SSO requires a JAASRealm, so why do I need to set up <Realm className="org.apache.catalina.realm.JNDIRealm"> in server.xml?
Is JAASRealm a wrapper that uses JNDI in order to work with AD?
Thanks
I understand a realm is a DB of users/passwords. This DB data can be taken from several places: a database, Active Directory, a users.xml file, etc.
It is a service, not just a database. It can be implemented via an XML file, a JNDI interface, a JDBC interface, JAAS, several others.
When configuring krb5.ini and jaas.config I need to provide a realm name. Where do I find this realm name?
You configure it in a Realm entry in either your context.xml file or, if you want it global across webapps, in server.xml. Then you refer to that name in the files you mention.
On our Active Directory machine, no system admin ever created a Realm object.
Of course not. They don't exist in AD servers. You're looking in the wrong place.
So how do I know what realm name to enter in the configuration?
In this case you would use a JNDI or JAAS realm.
SPNEGO w/ SSO requires a JAASRealm
So there's your answer.
So why do I need to set up <Realm className="org.apache.catalina.realm.JNDIRealm"> in server.xml?
You don't. You need to configure a JAAS realm, as you just said above. Unclear why you think a JNDI realm is required here.
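For illustration, a JAASRealm declaration in server.xml (or context.xml) looks roughly like this; the appName and the principal class names are placeholders and must match your jaas.config entry and login module:
<Realm className="org.apache.catalina.realm.JAASRealm"
       appName="SpnegoLogin"
       userClassNames="com.example.auth.KerberosUserPrincipal"
       roleClassNames="com.example.auth.KerberosRolePrincipal" />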
Is JAASRealm a wrapper that uses JNDI in order to work with AD?
You need to look some of these things up. JAAS is a service that can use any combination of login modules, including some you can write yourself. What they do is up to them, and to you if you write them. Too broad to answer here.
This question is related to Weblogic 12c.
I have an EAR file that I want to deploy in various environments (dev, QA, pre-prod and prod). However, my application requires a username and a password (to connect to another server) and they're not the same across the four environments. I don't want to package 4 different property files in 4 different EAR files. I want a single generic EAR file. Besides, I don't want to handle the prod password during packaging.
Ideally, I'd like the admin of each environment to provide the appropriate username and password for the environment. Unlike Tomcat, Jetty or JBoss(?), I think it's not possible for a WebLogic admin to specify this information in a way that it will become available under the java:comp/env JNDI context.
How can an application obtain some admin-defined configuration strings from Weblogic?
BTW, it's not a username/password for a JDBC connection.
From what I understand, you need to change parameters based on the environment you are using, right?
If you would like to override parameters on the fly, you can use the WebLogic deployment plan concept.
Did you mean that you need to provide a username/password to start up the application?
If so, you may accomplish that by creating a script with WLST http://docs.oracle.com/cd/E15051_01/wls/docs103/config_scripting/using_WLST.html
As far as I know, the WebLogic way is to
Define your username/password as an env-entry in the deployment descriptor
Deploy your application together with a plan.xml, where each environment admin maintains his own environment-specific version of the plan.xml
That way you get them into java:comp/env
More details here: http://docs.oracle.com/cd/E11035_01/wls100/deployment/config.html
Only drawback known to me: plan.xml will always contain the unencrypted password, but as the admin knows the password anyway and this is "his" file on "his" machine, that should be fine.
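For illustration, such an env-entry in web.xml might look like the sketch below (the name serviceUsername is made up here); each admin's plan.xml then overrides env-entry-value for his environment, and the application reads it from JNDI under java:comp/env/serviceUsername.
<env-entry>
  <env-entry-name>serviceUsername</env-entry-name>
  <env-entry-type>java.lang.String</env-entry-type>
  <env-entry-value>dev-user</env-entry-value>
</env-entry>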
I'm trying to convince the higher-ups at my work place to migrate to Apache Ivy. I've managed to get a few sandbox projects working using Ivy to power the build, and now I have a greenlight to put together a migration proposal.
We all agree on one thing: we don't want to trust JARs that are located in public directories! I know, I know, a bit paranoid, yes. But we'd like to have a setup where we pull a JAR from a trusted source (either downloading it from the open source project itself, or most likely, gulp, a public repo), and use it for some time before we "certify" it (give it our blessing as a safe artifact to use).
Then we want to have a common repository for all JARs used by our many projects.
My original thinking was to place this repository up in version control (we have an SVN server). But I wasn't sure what best practices dictate. It might make more sense to put our JARs on a file server and FTP to them in the Ivy script.
Either way, SVN (HTTPS) or FTP, all of our servers are authenticated. So, a small number of questions:
Where should we be publishing all of our "certified" JARs (everything from `log4j` to any homegrown JARs we produce)? What do best practices dictate?
The "ivyrep" resolver-type does not take username or passwd atrributes. If our "JAR server" (FTP, SVN, etc.) is authenticated, how do I configure the Ivy scripts to login?
I must echo Brian's recommendation to use a repository manager like Nexus. It's a lot less work in the long run. You'll also discover that the professional version of Nexus enables you to create approval processes around repositories which you plan to use in your build. See the procurement suite functionality.
If, on the other hand, you are determined to build your own repository, then ivy has the tools for the job. You need to become very familiar with the ivy settings file and how it declares and uses resolvers.
If the repository is accessible via HTTPS then the url resolver should be able to access it. The resolver assumes that each version of an artifact is in a different directory, and you'll need to specify the URL patterns that Ivy should use when accessing the repository:
<url name="two-patterns-example">
<ivy pattern="http://ivyrep.mycompany.com/[module]/[revision]/ivy-[revision].xml" />
<artifact pattern="http://ivyrep.mycompany.com/[module]/[revision]/[artifact]-[revision].[ext]" />
</url>
The patterns are fully flexible with respect to how you store the artifacts.
Authentication is also handled in the settings file using the credentials tag.
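A sketch of the credentials tag in the ivysettings file, reusing the hypothetical host from the URL patterns above (the realm attribute must match the authentication realm your server reports; username/passwd are placeholders):
<credentials host="ivyrep.mycompany.com" realm="My Repository Realm" username="builduser" passwd="buildpassword" />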
Finally, the FTP protocol is also supported. It's hard to find in the doco, but it's supported by the vfs resolver.
I think that's enough information on an option I don't recommend :-) Having said that I once created an FTP based repository for managing releases to clients. It's useful to have a tool this powerful :-)
Why not use something like Sonatype's Nexus? I've seen it used for Maven, and I believe it'll work for Ivy.
You can set it up to download from remote repositories into (say) a 'test' repository. You can then evaluate those .jars, and if they're good, upload them into an 'approved' repository for general consumption. There's some authentication surrounding this, but you'd have to evaluate that in greater depth. Certainly you can restrict the uploading into repositories via a username/password pair.
So I'm not sure what the best way to accomplish this is, but basically I have a laptop that I use at work for Maven projects. It works fine when I'm at work, but as soon as I walk out the door, away from their corporate proxy and Maven server, I often have to do a lot of hand-fudging of the settings.xml file at home if I'm not VPN'ed in:
We have a corporate-installed Maven Repository proxy server to store some of our own artifacts and handle being the middle-man for our commonly used artifacts.
We have an http proxy that we use for connecting to the outside world.
Both configurations have been handled by my settings.xml file, which sets a single Nexus group and the Maven proxies. If I'm not connected to the VPN while away from the office, I have to muck around with the settings.xml each time I'm not on it, then switch it back when I am.
What solutions has anyone else found to handle this? I've been trying profiles to manage the proxy, but I can't seem to get it to work correctly, and it's starting to look pretty ugly. Are there some settings configurations that can detect when I'm not behind the proxy at work, and skip the corporate proxy server and Maven server?
While I can think of a profile-based solution to handle the proxy (basically, reading the <active> value from a property defined in a profile), this wouldn't be fully automated (profile activation does not support network-based triggers) unless you can find a file that is present or absent depending on your location (in which case you could use an existing/missing file trigger, but this is kind of hacky). Anyway, this would solve only one part of the problem, because mirrors can't be declared in profiles (see MNG-3525).
So, instead of trying to control this with a profile, my suggestion would be to use two settings.xml files and to pass your settings-home.xml file with the -s command line option when you're at home.
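For example, assuming the home variant lives in ~/.m2:
mvn clean install -s ~/.m2/settings-home.xml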
Another option would be to automate the changes in your settings.xml with a script (Groovy would be a good choice as someone reported in MNG-3525).
I found using environment variables to set nonProxyHosts, together with proxy and noproxy shell aliases, to be the most convenient solution when switching between networks with a proxy and without one.
In settings.xml, configure proxy with
<host>proxy.corporation.int</host>
<port>8080</port>
<nonProxyHosts>${env.MAVEN_NONPROXY}</nonProxyHosts>
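For context, that fragment sits inside a proxy entry in the proxies section; a fuller sketch (the id and protocol values here are assumptions):
<proxies>
  <proxy>
    <id>corp</id>
    <active>true</active>
    <protocol>http</protocol>
    <host>proxy.corporation.int</host>
    <port>8080</port>
    <nonProxyHosts>${env.MAVEN_NONPROXY}</nonProxyHosts>
  </proxy>
</proxies>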
Then in ~/.profile set
export MAVEN_NONPROXY_PROXY='*.corporation.int|local.net|some.host.com'
export MAVEN_NONPROXY_NOPROXY='*'
alias proxy="export MAVEN_NONPROXY=\"$MAVEN_NONPROXY_PROXY\" && export all_proxy=http://proxy.corporation.int:8080"
alias noproxy="export MAVEN_NONPROXY=\"$MAVEN_NONPROXY_NOPROXY\" && unset all_proxy"
To do the switch when roaming, you would just execute from a shell:
[me@linuxbox me]$ proxy
or
[me@linuxbox me]$ noproxy
Obviously, both aliases proxy and noproxy can include many more changes than just the setup of MAVEN_NONPROXY and all_proxy.
I was frustrated by the same problem: having to manually edit settings.xml when roaming between networks. So much in fact, that I wrote a Maven plugin that enables automatic discovery of proxy settings. The current implementation uses the proxy-vole library written by Bernd Rosstauscher to detect proxy settings based on OS configuration, browser, and environment settings.
I've just released the source code of the plugin on Github, under an Apache 2.0 license: https://github.com/volkertb/autoproxy-maven-plugin
You're welcome to give it a try and to see if it meets your needs. Any feedback or contributions are welcome!
(Note: you don't necessarily have to add the plugin to your project's POM. You can invoke it from the command line as well, after you've installed it. See the README on the site for more details.)
You can set MAVEN_OPTS when you need to activate a proxy:
export MAVEN_OPTS="-Dhttp.proxyHost=my-proxy-server -Dhttp.proxyPort=80 -Dhttp.nonProxyHosts=*.my.org -Dhttps.proxyHost=my-proxy-server -Dhttps.proxyPort=80 -Dhttps.nonProxyHosts=*.my.org"