I'm using the scripted test framework to test an sbt plugin. As part of that test, it needs to download artifacts from a private artifact store. Scripted seems to ignore the credentials in the ~/.sbt/0.13 directory. How can I get my test to use those credentials?
It would be good if I didn't have to hard code any path.
I assume you have followed the official Publishing document and have a .credentials file containing realm, host, user, and password:
realm=Sonatype Nexus Repository Manager
host=nexus.scala-tools.org
user=admin
password=admin123
Then, in the build.sbt of your test, add:
credentials += Credentials(BuildPaths.defaultVersionedGlobalBase(sbtVersion.value) / "credentials")
This is my hack/workaround for the problem.
In scripted.sbt I add the following code.
credentials ++= {
  // Render the resolved credentials as a `credentials ++= Seq(...)` expression.
  val out = credentials.value.map {
    case c: FileCredentials => s"""Credentials(new java.io.File("${c.path.getAbsolutePath}"))"""
    case c: DirectCredentials => s"""Credentials("${c.realm}", "${c.host}", "${c.userName}", "${c.passwd}")"""
  }.mkString("credentials ++= Seq(", ", ", ")")
  // Write a credentials.sbt into every scripted test project.
  sbtTestDirectory.value.listFiles.flatMap(_.listFiles).foreach(f => IO.writeLines(f / "credentials.sbt", Seq(out)))
  List()
}
This takes the existing sbt credentials and writes a credentials.sbt into each of the scripted tests, e.g. sbt-test/test-group/test-name/credentials.sbt. These files are then picked up by the scripted tests.
My sbt knowledge is somewhat limited so this is a bit ugly.
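If you'd rather not generate files at all, a hedged alternative sketch: forward the credentials file's path to the scripted sbt instance as a JVM property, and read it back in each test's build.sbt. The property name plugin.credentials.path below is made up for this sketch.

// In scripted.sbt: pass the real credentials file's location to the tests.
// "plugin.credentials.path" is a hypothetical property name.
scriptedLaunchOpts += "-Dplugin.credentials.path=" +
  (Path.userHome / ".ivy2" / ".credentials").getAbsolutePath

// In sbt-test/test-group/test-name/build.sbt: read the property back.
credentials += Credentials(file(sys.props("plugin.credentials.path")))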
Currently we have 9 different URLs in our requirement scope, implemented as an application URL in a config file.
Every time I have to change the application URL, I need to manually update it in the config file before I can execute the required scenario, which is a tedious task.
I would like to pass the application URL as a command-line argument.
Current configuration of the config file:
#application.url=http://node-1.nginx.portal.da-1.can.qa.aws.com
#http://node-1.nginx.portal.da-1.QA1.aws.com
#http://node-1.nginx.portal.da-1.QA2.qa.aws.com
#http://node-1.nginx.portal.da-1.QA3.qa.aws.com
#http://node-1.nginx.portal.da-1.QA4.qa.aws.com
#http://node-1.nginx.portal.da-1.QA5.qa.aws.com
#http://node-1.nginx.portal.da-1.QA6.qa.aws.com
public void LaunchApplication() {
    LOG.info("Launching web application URL: " + CONFIG.getProperty("application.url"));
    driver.manage().deleteAllCookies();
    driver.get(CONFIG.getProperty("application.url"));
}
I'm going to assume that you are running your Selenium Cucumber tests as a Maven project.
With Maven you can pass as many JVM system properties as you like; I do this a lot in my mvn commands for CI/CD build pipelines in Jenkins.
Here is what I would do.
Update your method to read the URL from a system property:
public void LaunchApplication() {
    // Read the URL from a JVM system property instead of the config file.
    String appUrl = System.getProperty("applicationUrl");
    LOG.info("Launching web application URL: " + appUrl);
    driver.manage().deleteAllCookies();
    driver.get(appUrl);
}
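If you want the config-file value to remain the default when no property is supplied, System.getProperty has a two-argument overload; a small sketch, assuming CONFIG stays available as before:

public void LaunchApplication() {
    // Fall back to the config file when -DapplicationUrl is not supplied.
    String appUrl = System.getProperty("applicationUrl", CONFIG.getProperty("application.url"));
    LOG.info("Launching web application URL: " + appUrl);
    driver.manage().deleteAllCookies();
    driver.get(appUrl);
}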
Then pass the property in your mvn command, for example:
mvn test -Pcucumber -Dcucumber.options="--tags #app-smoke-001" -Dbrowser=chrome -Dclose_browser=yes -DapplicationUrl="http://node-1.nginx.portal.da-1.can.qa.aws.com"
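Surefire normally forwards -D properties from the Maven command line to the forked test JVM, but if your setup doesn't, here is a hedged pom.xml sketch that forwards the property explicitly via maven-surefire-plugin:

<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-surefire-plugin</artifactId>
  <configuration>
    <systemPropertyVariables>
      <!-- Forward the command-line property to the test JVM -->
      <applicationUrl>${applicationUrl}</applicationUrl>
    </systemPropertyVariables>
  </configuration>
</plugin>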
I built an Electron app and I am now looking at how to distribute it.
I went with electron-builder to handle packaging etc.
For a bit of context: as a web developer, I am used to continuously deploying web apps to a web server, but I have a hard time figuring out how to distribute a packaged Electron app.
In the electron-builder docs there is a brief mention of testing auto-updates:
"Note that in order to develop/test UI/UX of updating without packaging the application you need to have a file named dev-app-update.yml in the root of your project, which matches your publish setting from electron-builder config (but in YAML format)"
But it's rather vague...
So I actually have two questions:
1. How do I actually test the auto-update flow?
Do I need to actually publish a new version to trigger an update locally? That seems pretty unclear; it would be like developing against the production server.
2. Is it possible to have a fallback for unsigned code?
I don't yet have a certificate for code signing, so the OS/app will block the auto-update. But I'd still want to tell the user that an update is available so they can go and download the app manually. Can I do that? (Going back to point 1, I'd like to be able to test this flow.)
I've just finished dealing with this. I also wanted to test against a non-production server and avoid having to package my app each time I iterated. To test downloads I had to sign my app, which slowed things down; but it sounds like you just need to check for updates, which I think you can do as follows.
I created a dummy GitHub repo, then created a file dev-app-update.yml containing:
owner: <user or organization name>
repo: dev-auto-update-testing
provider: github
The path where this file is expected to be defaults to a place you can't access. Thankfully, you can override it like so:
const path = require('path');
const isDev = require('electron-is-dev'); // or your own dev-mode check
const { autoUpdater } = require('electron-updater');

if (isDev) {
  // Useful for some dev/debugging tasks, but the download cannot
  // be validated because the dev app is not signed.
  autoUpdater.updateConfigPath = path.join(__dirname, 'dev-app-update.yml');
}
...that should be enough for your case -- since you don't need downloads.
If not, here are some other tips:
you can change the repo setting in your electron-builder config to point at your dummy repo, then package your app. This will give you a packed production build that points at your dummy repo -- this is how I did my download testing (though I have a cert and signed my app)
you should be calling autoUpdater's checkForUpdates(), but if checkForUpdatesAndNotify() gives you a useful OS notification then you should be able to set autoUpdater.autoDownload to false and end up with what you need.
Lastly, it sounds like you could skip autoUpdater entirely, since you won't be using the download feature anyway. Instead you could use GitHub's releases API, assuming you use GitHub to host your releases; if not, your host should have something similar. Use that to check for updates, then tell the user from within your app (you could present them with a clickable URL too). If you want OS notifications, Electron has a module for that.
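For illustration, a minimal sketch of that approach (not the electron-updater API): poll GitHub's latest-release endpoint from the main process and show a clickable notification. OWNER/REPO are placeholders, and the global fetch assumes a recent Electron/Node with fetch built in.

const { app, Notification, shell } = require('electron');

async function checkForUpdateViaGitHub() {
  const res = await fetch('https://api.github.com/repos/OWNER/REPO/releases/latest');
  if (!res.ok) return; // e.g. rate-limited or no releases yet

  const release = await res.json();
  const latest = release.tag_name.replace(/^v/, '');
  if (latest === app.getVersion()) return; // already up to date

  const notification = new Notification({
    title: 'Update available',
    body: `Version ${latest} is available. Click to open the download page.`,
  });
  notification.on('click', () => shell.openExternal(release.html_url));
  notification.show();
}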
We're using electron-updater with GitHub as a provider for auto-updates. Unfortunately, it breaks a lot and the electron-builder team doesn't support these issues well (1, 2, 3) (from my own experience, but you can find more examples on GitHub).
One way to test updates in dev mode:
Create a build of your app with an arbitrarily high version number
Create a public repo and publish the above build
Create a dev-app-update.yml next to your main entry point and configure it for the repo above (see)
In your main entry point:
import { autoUpdater } from "electron-updater";

...

if (process.env.NODE_ENV === "development") {
  // Customize the test by toggling these lines
  // autoUpdater.autoDownload = false;
  // autoUpdater.autoInstallOnAppQuit = false;
  autoUpdater.checkForUpdates();
}
Then when running yarn dev you should see something like:
Checking for update
...
Found version 100.0.0 (url: <>.exe)
Downloading update from <>.exe
updaterCacheDirName is not specified in app-update.yml Was app build using at least electron-builder 20.34.0?
updater cache dir: C:\Users\<>\AppData\Local\Electron
New version 100.0.0 has been downloaded to C:\Users\<>\AppData\Local\Electron\pending\<>.exe
And it should install when you close the dev app.
This should give you some certainty, but we still ran into issues in production. If you want to be sure, play through the full update flow with a test repo and packaged production apps, just as you would with the live one.
TL;DR: Why does everything run fine when started via IntelliJ, and why is it broken when calling java -jar app.jar? And how do I fix this?
Alright, I have some issues with a backend I am trying to dockerize. I have an application created with Spring Boot (1.4.2.RELEASE), following the Spring OAuth (2.0.12.RELEASE) guide on their page. I follow the Gradle version, since I prefer Gradle over Maven; also, I am using Kotlin instead of Java. Everything is fine: I start my backend with the static front end via IntelliJ, I can log in via Facebook (and Google and GitHub), I receive a nice Principal which holds all the information I need, and I can modify Spring Security to authorize and permit endpoints. So far so good.
Now for the bad part. When I run either ./gradlew clean build app:bootRun or ./gradlew clean build app:jar and run the jar via java -jar (like I will do in my Docker container), my backend comes up and my static front end pops up. Now I want to log in via Facebook: I end up on the Facebook login page, I enter my credentials, and... nothing!
I end up back on my homepage, not logged in, with no log messages that mean anything to me, just silence. The last thing I see in the log is: Getting user info from: https://graph.facebook.com/me
In my browser, this URL gives me:
{
  "error": {
    "message": "An active access token must be used to query information about the current user.",
    "type": "OAuthException",
    "code": 2500,
    "fbtrace_id": "GV/58H5f4fJ"
  }
}
When the app is started via IntelliJ, this URL gives me credential details. Obviously something is going wrong, but I have no clue what, especially since a run from IntelliJ works fine. There is some difference between how the jar is started and how IntelliJ's run configuration works, but I have no clue where to search for what. I could post trace logging or all my Gradle files, but perhaps that's too much info to put in one question. I will definitely update this question if someone needs more details :)
The structure outline of this project is as follows:
root:
- api: is going to be open-sourced later; contains REST definitions and DTOs.
- core: contains the meat. Its Gradle file also includes
  spring-boot-starter, -web, -security, spring-security-oauth2, and some Jackson stuff.
- rest: contains versioned REST service implementations.
- app: contains Angular webjars amongst others, the front end, and
  my `@SpringBootApplication`, `@EnableOAuth2Client`
  and the impl of `WebSecurityConfigurerAdapter`.
Why does everything run fine when started via IntelliJ, and why is it broken when using bootRun or the jar artefact? And how do I fix this?
I found it. The problem was not related to multi-module Gradle, Spring Boot, or OAuth2. In fact, it was due to Gradle's source-set configuration, where Java sources were supposed to be in the Java source-set folder and Kotlin sources in the Kotlin source-set folder:
sourceSets {
    main.java.srcDirs += 'src/main/java'
    main.kotlin.srcDirs += 'src/main/kotlin'
}
As Will Humphreys stated in his comment above, IntelliJ takes all source sets and runs the app. However, when building the jar via Gradle, these source sets are applied more strictly. I had a Java file in my Kotlin source set, which is no problem for IntelliJ, but the jar created by Gradle honors the source sets exactly as defined in the build.gradle file.
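If you'd rather keep stray Java files next to your Kotlin sources instead of moving them, a hedged alternative sketch is to add the Kotlin directory to the Java source set as well, so javac also picks up the .java files living there:

sourceSets {
    // Let javac compile .java files that live under src/main/kotlin too.
    main.java.srcDirs += 'src/main/kotlin'
}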
I found my missing bean issue with the code below:
@Bean
public CommandLineRunner commandLineRunner(ApplicationContext ctx) {
    return args -> {
        System.out.println("Let's inspect the beans provided by Spring Boot:");
        String[] beanNames = ctx.getBeanDefinitionNames();
        Arrays.sort(beanNames);
        for (String beanName : beanNames) {
            System.out.println(beanName);
        }
    };
}
The bean I was missing was called AuthenticationController, which is a @RestController, and kinda crucial for my authentication code.
I have a maven2 repository from which I'm trying to fetch a snapshot artifact with an appended timestamp. I'm (unsurprisingly) able to retrieve it fine when building with maven2, but when building with simple-build-tool (sbt), much preferred by me, I can't pull it down.
I can see from this question about snapshots in Ivy that it is possible to configure Ivy to get snapshot artifacts but I don't know how to tell sbt to do it.
The relevant bits of my current configuration:
val snapshotsName = "Snapshots Repository"
val snapshotsUrl = new java.net.URL("http://host:port/path/to/root")
val snapshotsPattern = "[organisation]/[module]/[revision]/[artifact]-[revision](-[classifier]).[ext]"
val snapshots = Resolver.url(snapshotsName, snapshotsUrl)(Patterns(snapshotsPattern))
Credentials(Path.userHome / ".ivy2" / ".credentials", log)
Update: After some more tinkering, it looks like I can get it to point at the correct artifact URL with the following pattern:
val snapshotsPattern = "[organisation]/[module]/[revision]-SNAPSHOT/[artifact]-[revision](-[timestamp]).[ext]"
With that I still need to specify the timestamp as an extra attribute in the dependency:
val dep = "group" % "artifact" % "0.0.1" extra("timestamp" -> "20101202.195418-3")
but it does pull the artifact. However, it does NOT pull the artifact's dependencies, so it seems I've still got something wrong.
Alright, I got this sorted out, but it wasn't actually an SBT problem; it was 100% user error.
The Nexus I was using required authentication. I didn't have the authentication credentials set up correctly (see my authentication question), and because it wasn't properly authenticating, it wasn't finding the metadata POM files, so SBT printed an error message saying that it was failing. I incorrectly assumed it was authenticating but not resolving, so I started messing with the patterns, as evidenced in the question above.
However, now that I have authentication set up correctly, I changed back to a regular repository declaration like so:
val snapshotsRepo = "Snapshots Repository" at "http://host:port/path/to/snapshots/root/"
and everything works. Artifacts are retrieved and dependencies resolved.
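For reference, the ~/.ivy2/.credentials file uses the same realm/host/user/password property format shown earlier on this page; the values below are placeholders:

realm=Sonatype Nexus Repository Manager
host=host
user=deployment
password=secret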
Is there a plugin available for sbt which automatically installs JRuby and adds some hooks to automatically run scripts at certain points (e.g. before compilation)?
Background: for a (Lift) project, I want to use Sass, or more specifically Compass, as a tool for generating the CSS. A Java or Scala clone of Sass would unfortunately not be enough.
Of course, it wouldn’t be a problem at all to just generate the CSS manually and then put both inside the repository, so that no-one ever needs to care about it.
On the other hand, to ease development on several systems, I’d rather have a clear dependency inside sbt which simply does the work.
Is there any way to achieve this?
Or generally: Is there a way to run JRuby scripts from inside Scala?
Edit: Added maven-2 to the tags. Maybe there is a Maven repo for JRuby which would allow me to deliver and use it somehow.
While it's perhaps not the most elegant solution, you can always call external scripts using the Process support in SBT.
import sbt._
import Process._

class Project(info: ProjectInfo) extends DefaultProject(info) {
  lazy val runSomething = task {
    "ruby somescript.rb" ! log
    None
  }
}
There's a bit more info about the external process support available here: http://code.google.com/p/simple-build-tool/wiki/Process
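For what it's worth, here is a hedged sketch of the same idea in newer sbt (1.x) build.sbt syntax, using scala.sys.process; the task name is illustrative:

import scala.sys.process._

lazy val runSomething = taskKey[Unit]("Run an external Ruby script")

runSomething := {
  // Shell out to ruby and fail the task on a non-zero exit code.
  val exitCode = "ruby somescript.rb".!
  if (exitCode != 0) sys.error(s"somescript.rb exited with code $exitCode")
}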
Try my sbt plugin from GitHub. It's very bare-bones for now, but it should download the JRuby jar and allow you to call a .rb file before compiling.
The guts of the plugin are really simple:
import sbt._

object SbtJRuby extends Plugin {
  object SbtJRubySettings {
    lazy val settings = Seq(Keys.commands += jirb, tx, jrubyFile := file("fnord.rb"))
  }

  def jirb = Command.args("jirb", "<launch jirb>") { (state, args) =>
    org.jruby.Main.main(List("-S", "jirb").toArray[String])
    state
  }

  val jruby = TaskKey[Unit]("jruby", "run a jruby file")
  val jrubyFile = SettingKey[File]("jruby-file", "path to file to run with JRuby")

  val tx = jruby <<= (jrubyFile, Keys.baseDirectory) map { (f: File, b: File) =>
    val rb = (b / f.toString).toString
    // println("jruby with " + rb)
    org.jruby.Main.main(List(rb).toArray[String])
  }
}
Really, all it's doing is calling the JRuby jar with the name of the .rb file you've passed in. And of course it adds JRuby itself as a managed dependency:
libraryDependencies ++= Seq(
  "org.jruby" % "jruby-complete" % "1.6.5"
)
It also adds a "jirb" command to the sbt console that puts you into a jirb session.
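And if you only need to run JRuby scripts from inside Scala code (rather than from the build), here is a hedged sketch using JRuby's embedding API, which comes with the jruby-complete dependency shown above:

import org.jruby.embed.ScriptingContainer

object RunRubyScript extends App {
  val container = new ScriptingContainer()
  // Run an inline snippet...
  container.runScriptlet("puts 'Hello from JRuby!'")
  // ...or a script file from disk (the path is illustrative).
  container.runScriptlet(new java.io.FileReader("somescript.rb"), "somescript.rb")
}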