Currently, the default and latest Ruby in Nix is 2.2.2-p0. When I run nix-env -qaP ruby, it returns a list which says that this Ruby version is accessed via nixpkgs.ruby. I expect that this ruby link will change to stay up-to-date with the latest supported Ruby version. There is no optional nixpkgs.ruby_2_2_2 for me to use to ensure my ruby version.
Looking at the .nix definition file at https://github.com/NixOS/nixpkgs/blob/master/pkgs/development/interpreters/ruby/ruby-2.2.2.nix, however, I see that they specify the revision in that script.
So I'm wondering: is there some way to specify the revision of the Nix package I want when listing it in the buildInputs of the Nix expression for my development environment (which will be accessed via nix-shell .)? Or is there something else I could do to ensure that Ruby 2.2.2-p0 is used for the installation, rather than just the latest Ruby, which might change?
Current script:
let
  pkgs = import <nixpkgs> {};
in with pkgs; {
  rubyEnv = stdenv.mkDerivation rec {
    name = "ruby-env";
    version = "0.1";
    src = ./.;
    buildInputs = [
      stdenv
      ruby
      bundler_HEAD
    ];
  };
}
I didn't see this covered in the documentation at http://nixos.org/nix/manual/#chap-writing-nix-expressions
"There is no optional nixpkgs.ruby_2_2_2 for me to use to ensure my ruby version."

Actually, there is a ruby_2_2_2 attribute in nixpkgs:
$ nix-env -qaP ruby
nixos.ruby_1_8 ruby-1.8.7-p374
nixos.ruby_1_9 ruby-1.9.3-p551
nixos.ruby_2_0 ruby-2.0.0-p645
nixos.ruby_2_1_0 ruby-2.1.0-p0
nixos.ruby_2_1_1 ruby-2.1.1-p0
nixos.ruby_2_1_2 ruby-2.1.2-p353
nixos.ruby_2_1_3 ruby-2.1.3-p0
nixos.ruby_2_1 ruby-2.1.6-p0
nixos.ruby_2_2_0 ruby-2.2.0-p0
nixos.ruby ruby-2.2.2-p0
nixos.bundler_HEAD ruby-2.2.2-p0-bundler-2015-01-11
By looking at the definition of the ruby package in the package index, you can see that the current default ruby is just an alias for Ruby 2.2:
ruby = ruby_2_2;
which is in turn an alias for Ruby 2.2.2:
ruby_2_2 = ruby_2_2_2;
To override the ruby package to a specific Ruby version in a Nix expression, overridePackages can be used:
let
  nixpkgs = import <nixpkgs> {};
  pkgs = nixpkgs.overridePackages (self: super: {
    ruby = nixpkgs.ruby_2_2_2;
  });
in with pkgs; {
  rubyEnv = stdenv.mkDerivation rec {
    name = "ruby-env";
    version = "0.1";
    src = ./.;
    buildInputs = [
      stdenv
      ruby
      bundler
    ];
  };
}
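Assuming this expression is saved as shell.nix (or default.nix) in the project root, you would then enter the pinned environment with nix-shell; since the expression evaluates to a set with a rubyEnv attribute, you will likely need to select that attribute with -A (a sketch, not verified here):

$ nix-shell -A rubyEnv
[nix-shell]$ ruby --version    # should now report ruby 2.2.2p0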
I'd like to use the 'faker' library to generate fake data in a JSON file as below.
In karate-config.js, I do the following:
var faker = require('faker');
In sample.json:
{
'firstName': '#(faker.name.firstName)'
'city' : '#(faker.address.city)'
}
But I'm getting an error like 'unable to find 'require' keyword in karate-config.js'.
Please help on this.
Karate does not support NPM or the require keyword. For complex custom logic, the recommendation is to use Java interop.
My suggestion is to find a Java library that does what "faker" does and integrate it.
First, add the dependency below to your pom.xml:
<dependency>
    <groupId>com.github.javafaker</groupId>
    <artifactId>javafaker</artifactId>
    <version>1.0.2</version>
</dependency>
You can find the latest version of the dependency on Maven Central.
Use the below code in karate-config.js:
config.faker = Java.type('com.github.javafaker.Faker');
In the feature file, use the below code:
* def fakerObj = new faker()
* def fName = fakerObj.name().firstName()
* def lName = fakerObj.name().lastName()
* def mailId = fName + '.' + lName + '@test.com'
You could use the same in the JSON body as follows:
"emailAddress":"#(mailId)",
"firstName":"#(fName)",
"lastName":"#(lName)",
"address":{
line1:"#(fakerObj.address().streetAddress())"}
See the Javafaker documentation for the classes and methods of the faker package.
As far as I know, Karate only supports Java-based dependencies, so try to find a Java equivalent for your JS library.
Thanks for your response and suggestion. I tried the below and it is working fine.
In karate-config.js:
var faker = Java.type('.FakerClass');
......
config.faker = faker;
In sample.json:
{ 'name': '#(faker.address.city)' }
I ran into an issue when developing an export feature using Aspose on the .NET Core platform. The issue is that some of the row data exported as PDF is being cut off.
I tried it in my local environment (Windows) and there is no issue. The result is as good as I expected.
But when I publish the code to the host environment (Linux), the result seems off.
My local result (Windows):
My hosted result (Linux):
Here is the piece of code that I am using to generate the file:
public FileData ExportToFile(DataTable data, string format, string query)
{
    var workbook = new Workbook();
    var style = workbook.CreateStyle();
    workbook.DefaultStyle = style;
    var sheet = workbook.Worksheets[0];
    AutoFitterOptions oAutoFitterOptions = new AutoFitterOptions { AutoFitMergedCells = true, OnlyAuto = true };
    data = _RemoveFormat(data); // Parse all data to String

    /**/
    sheet.PageSetup.Orientation = PageOrientationType.Landscape;
    PdfSaveOptions pdfSaveOptions = new PdfSaveOptions();
    pdfSaveOptions.AllColumnsInOnePagePerSheet = true;
    sheet.AutoFitRows(oAutoFitterOptions);
    sheet.AutoFitColumns(oAutoFitterOptions);
    workbook.Save(stream, pdfSaveOptions); // 'stream' is defined in code omitted here
    /**/
}
After discussing with Aspose support, I found out that the difference is caused by a font missing from my hosted server (Linux). So I needed to install it on the server to get both documents to have the same style.
In my case, my app was hosted in a Kubernetes cluster, so I needed to add these commands to the Dockerfile:
RUN echo ttf-mscorefonts-installer msttcorefonts/accepted-mscorefonts-eula select true | debconf-set-selections
RUN apt-get update \
&& apt-get install -y --allow-unauthenticated \
fontconfig \
ttf-mscorefonts-installer
How do you set a cookie in Geb? I'm running into the following error with the given example:
org.openqa.selenium.InvalidCookieDomainException: {"errorMessage":"Can only set Cookies for the current domain" ....
I've also tried explicitly setting the cookie domain using the Cookie builder, though that only causes another exception: org.openqa.selenium.UnableToSetCookieException: {"errorMessage":"Unable to set Cookie"}
Note that I used to have a baseURL in the GebConfig.groovy file, but I have removed it as well. Other than the PhantomJS driver config, there are no settings in the config file.
I'm on OS X and using the latest PhantomJS (1.3.0 jar, and 2.1.1 driver for OS X).
Note that the example DOES work using the Chrome WebDriver, for some reason.
import geb.spock.GebSpec
import org.openqa.selenium.Cookie

class SetCookieIT extends GebSpec {
    def "Cookie example"() {
        given:
        def options = driver.manage()

        when:
        go "https://www.wikipedia.org/"

        then:
        !options.getCookieNamed("my-geb-cookie")

        when:
        options.addCookie(new Cookie("my-geb-cookie", "foobar"))
        go "https://www.wikipedia.org/"

        then:
        title == "Wikipedia"
        options.getCookieNamed("my-geb-cookie").value == "foobar"
    }
}
Wikipedia is not spelt with an "ie" in the domain name, and "org.com" also looks very strange. Maybe next time you want to provide an example which is actually executable and does something meaningful. :-7
For me this works nicely:
package de.scrum_master.stackoverflow

import geb.spock.GebReportingSpec
import org.openqa.selenium.Cookie

class SetCookieIT extends GebReportingSpec {
    def "Cookie example"() {
        given:
        def options = driver.manage()

        when:
        go "https://www.wikipedia.org/"

        then:
        !options.getCookieNamed("my-geb-cookie")

        when:
        options.addCookie(new Cookie("my-geb-cookie", "foobar"))
        go "https://www.wikipedia.org/"

        then:
        title == "Wikipedia"
        options.getCookieNamed("my-geb-cookie").value == "foobar"
    }
}
If you have any further problems, please update your question and provide an SSCCE reproducing the actual problem.
Update after the question was modified: The problem with PhantomJS is that it refuses to create cookies if you do not explicitly specify the domain. This works:
options.addCookie(new Cookie("my-geb-cookie", "foobar", ".wikipedia.org", "/", null))
I am trying to parse XML using PhantomJS with the following file, documentpreviewer1.js:
var webPage = require('webpage');
var page = webPage.create();
var url = "http://xxx/sitemap.xml";

page.open(url, function(status) {
    if (status != 'success') {
        console.log('Unable to access cfc');
    } else {
        var xml = page.content;
        var libxmljs = require("libxmljs");
        var xmlDoc = libxmljs.parseXml(xml);
        var url1 = xmlDoc.get('//urlset/url[0]/loc');
        console.log(url1);
    }
});
When I run the above code, I get the following error:

$ sudo phantomjs documentpreivewer1.js
Error: Cannot find module 'libxmljs'

  phantomjs://bootstrap.js:289
  phantomjs://bootstrap.js:254 in require
  documentpreivewer1.js:13
  :/modules/webpage.js:281
libxmljs is a node.js module. Although PhantomJS can be installed through npm (it doesn't need to be), it is not a node.js module. It doesn't share any built-in modules with node.js (fs seems the same, but is not equal to node.js's fs).
You can use some node.js modules in PhantomJS (see "Use a node module from casperjs" for a related question), but it doesn't seem like you could use libxmljs in PhantomJS, because it depends on node-bindings, which uses the fs and path modules. You will have to change the implementation so that all dependencies can be expressed with PhantomJS capabilities, as sketched below.
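For illustration, here is a minimal sketch (not tested against your sitemap) that drops libxmljs entirely and instead queries the XML DOM that WebKit builds for the loaded page from inside page.evaluate, using only APIs PhantomJS itself provides. It assumes the sitemap at http://xxx/sitemap.xml contains plain <loc> elements:

var page = require('webpage').create();

page.open('http://xxx/sitemap.xml', function(status) {
    if (status !== 'success') {
        console.log('Unable to access cfc');
    } else {
        // Ask the page context for the first <loc> element of the sitemap.
        var firstLoc = page.evaluate(function() {
            var locs = document.getElementsByTagName('loc');
            return locs.length > 0 ? locs[0].textContent : null;
        });
        console.log(firstLoc);
    }
    phantom.exit();
});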
An alternative might be to use phantom-node or spookyjs for a casperjs node.js module.
I would like to define different database connections for multiple test environments (Production, Staging, Development). After reading the post "How do I specify a config file with sbt 0.12.2 for sbt test?" it seems that this was possible in earlier versions of Play, by using the following SBT setting:
val main = play.Project(appName, appVersion, appDependencies).settings(
  javaOptions in Test += "-Dconfig.file=conf/test.conf"
)
But if I use this setting in my Build.scala, I get the following error:
not found: value javaOptions
So my question is, how can I define different connections for different test environments?
Edit:
A possible workaround would be to override the default settings during testing. This can be done with an environment variable.
object Config {
  var defaultConfig = Map(
    "db.default.user" -> "user",
    "db.default.pass" -> "************"
  )

  def additionalConfiguration(): Map[String, _] = sys.env.getOrElse("PLAY_TEST_SCOPE", "") match {
    case "development" => {
      defaultConfig += "db.default.url" -> "jdbc:mysql://host:3306/development"
      defaultConfig
    }
    case "staging" => {
      defaultConfig += "db.default.url" -> "jdbc:mysql://host:3306/staging"
      defaultConfig
    }
    case "production" => {
      defaultConfig += "db.default.url" -> "jdbc:mysql://host:3306/production"
      defaultConfig
    }
    case _ => {
      throw new Exception("Environment variable `PLAY_TEST_SCOPE` isn't defined")
    }
  }
}
And then run a fake application with this configuration:
FakeApplication(additionalConfiguration = Config.additionalConfiguration())
javaOptions is contained within the Keys object.
Make sure that you use the proper import in your Build.scala file:
import Keys._
We can mix the above solutions to pass the config file as a parameter to sbt.
This is useful for integrating the tests into a CI pipeline.
First, in the Build.scala file:
val testOptions = "-Dconfig.file=conf/" + Option(System.getProperty("test.config")).getOrElse("application") + ".conf"

val main = PlayProject(appName, appVersion, appDependencies, mainLang = SCALA).settings(
  javaOptions in Test += testOptions
)
Then, on the command line, to run the tests with integ.conf:
sbt -Dtest.config=integ test
To use the default application.conf:
sbt test
Update for Play 2.5.x
An explicit import of Keys._ is no longer required, and the VM parameter for the config resource location has changed:
javaOptions in Test += "-Dconfig.resource=<conf_name>.conf"
You can run your application from the console with an alternative config file. In any case, you need to use -Dconfig.file with the full path, as there are some problems ... that I can't resolve with the other option. For example, in a Unix environment:
play -Dconfig.file=/home/akkie/play/some-project/conf/local_akkie_dev.conf "~run 9123"
Of course, for easier launching you can create a bash script that calls this line, as sketched below.
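A minimal wrapper (the file name run-local.sh is just an illustration) could look like this:

#!/bin/sh
# run-local.sh -- start the app on port 9123 with the local dev config
play -Dconfig.file=/home/akkie/play/some-project/conf/local_akkie_dev.conf "~run 9123"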
Edit: Note that you don't need to write the whole config in each additional config file, as you can just include your main config at the beginning and then override only the required properties:
include "application.conf"
key.to.override=blah
Take a look at the official documentation.