I am getting the following output in the console when the test is executed by Arquillian.
Apr 15, 2014 7:41:56 PM org.jboss.arquillian.protocol.jmx.JMXMethodExecutor invoke
SEVERE:Failed:com.bidis.bridge.systemlog.server.facade.SystemLogTest.testInsertSystemLog1
Apr 15, 2014 7:41:56 PM org.jboss.arquillian.protocol.jmx.JMXMethodExecutor invoke
SEVERE:Failed:com.bidis.bridge.systemlog.server.facade.SystemLogTest.testInsertSystemLog
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.272 sec - in com.bidis.bridge.systemlog.server.facade.SystemLogTest
Results :
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0
The tests are green. But one of the tests, which is supposed to insert a record into the DB, is also green, yet no records are inserted in the DB.
I am not able to figure out what is happening here. Why is the SEVERE: Failed message there after the JMX invoke?
Any input on this would be appreciated.
Thank you
Sanjeev.
We had the same issue. It was a bug in Arquillian 1.1.4 and it's fixed in 1.1.5. Simply updating helped us.
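If you pull Arquillian in through Maven, bumping the BOM version should be enough; a minimal sketch, assuming you manage versions via the arquillian-bom (adjust to your build):
<!-- pom.xml: assumes the Arquillian BOM manages the version -->
<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>org.jboss.arquillian</groupId>
            <artifactId>arquillian-bom</artifactId>
            <version>1.1.5.Final</version>
            <type>pom</type>
            <scope>import</scope>
        </dependency>
    </dependencies>
</dependencyManagement>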
I am trying to migrate my CUDA application using dpct. When I call dpct, I see it process the CUDA files and generate some benign warnings, but at the end it exits without writing out any DPC++ equivalent files. I can clearly see CUDA functions being called in these applications, and removing the CUDA path would break the compile. This is the command I used:
$ dpct --report-type=all --cuda-include-path=/usr/local/cuda-10.2/include -p compile_commands.json
I have eliminated the actual physical paths to files to avoid confusion:
Processing: ....../ComputeThermoGPU.cu
Processing: ....../CommunicatorGPU.cu
Processing: ....../ParticleData.cu
Processing: ....../Integrator.cu
------------------APIS report--------------------
API name Frequency
-------------------------------------------------
----------Stats report---------------
File name, LOC migrated to DPC++, LOC migrated to helper functions, LOC not needed to migrate,
LOC not able to migrate
....../Integrator.cu, 1, 0, 168, 0
....../ParticleData.cu, 1, 0, 402, 0
....../ComputeThermoGPU.cu, 1, 0, 686, 0
....../ParticleGroup.cu, 6, 0, 111, 0
Total migration time: 17207.371000 ms
-------------------------------------
dpct exited with code: 1 (Migration not necessary)
I found a solution to my own question. Use:
dpct -p compile_commands.json --in-root=src --out-root=dpct_out --process-all
I think the reason might be that, in the absence of the driver code (containing the main function), DPCT assumes these helper .cu files are not being used anywhere and therefore sees no need to migrate them. That is why you see the "Migration not necessary" message.
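For reference, combined with the flags from the original command, the full invocation would look roughly like this (the include path is the one from the question; src and dpct_out are just example directory names):
$ dpct --report-type=all --cuda-include-path=/usr/local/cuda-10.2/include -p compile_commands.json --in-root=src --out-root=dpct_out --process-all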
I have a large insert query which ends in an error:
Msg 8152, Level 16, State 4, Line 1
String or binary data would be truncated
After some research I tried using TRACE FLAG 460, using the command below:
INSERT...
VALUES...
OPTION (QUERYTRACEON 460);
This gave the same error as before, so I tried to turn the flag on at the server level, using the command below:
DBCC TRACEON(460, -1);
Again, no change in the output. But when I check the flag status, it shows all the right information:
DBCC TRACESTATUS(460);
TraceFlag Status Global Session
460 1 1 0
Does anyone have a clue how I can get Trace Flag 460 working? My server information is down below:
Edition: Developer Edition (64-bit)
ProductVersion: 14.0.2037.2
ResourceLastUpdateDateTime: 2020-11-02 21:20:26.783
ResourceVersion: 14.00.2037
BuildClrVersion: v4.0.30319
Have you checked the documentation?
It clearly says:
Note: This trace flag applies to SQL Server 2017 (14.x) CU12 and higher builds
Which means SQL Server 2017 has to have a build version number of 14.0.3045.24 or higher - which you don't seem to have.
So you'll need to install at least CU12 (or better yet, the latest CU22 - https://www.microsoft.com/en-us/download/details.aspx?id=56128) on your machine for this to work.
See SQL Server 2017 build versions for all the details about the official version numbers of SQL Server 2017 (and its various CUs).
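Once you have patched, you can confirm the build and CU level with a quick query; a minimal check using documented SERVERPROPERTY values:
-- Trace flag 460 needs build 14.0.3045.24 (CU12) or later
SELECT SERVERPROPERTY('ProductVersion')     AS ProductVersion,
       SERVERPROPERTY('ProductUpdateLevel') AS ProductUpdateLevel;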
I'm writing some integration tests using Failsafe.
There are two features like this:
Feature: example feature 1
Scenario:
Given url 'http://httpbin.org/'
When method get
Then status 200
My "suite" is:
public class ApiIT {
@Test
public void testParallel(){
Results results = Runner.path("classpath:.").tags("~@ignore").parallel(5);
assertEquals(results.getErrorMessages(), 0, results.getFailCount());
}
}
When I run integration tests using mvn (mvn clean install) I get:
Karate version: 0.9.6.RC4
======================================================
elapsed: 1.41 | threads: 5 | thread time: 1.39
features: 2 | ignored: 0 | efficiency: 0.20
scenarios: 2 | passed: 2 | failed: 0
======================================================
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.558 sec - in ApiIT
Is there any way to count the real tests so I can get this in the logs:
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.558 sec - in ApiIT
I uploaded an example project here: https://github.com/italktothewind/test-count
Nope. Ignore the last line; that is JUnit, because you have one @Test annotation. What matters here is the Karate output. JUnit just makes it simpler to call Karate. But if it bothers you so much, call the Runner from a Java main method.
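For example, a plain main method (a sketch reusing the Runner call from your test; the class name is just an example) runs the features without the JUnit wrapper, so only Karate's own summary is printed:
import com.intuit.karate.Results;
import com.intuit.karate.Runner;

public class ApiRunner {
    public static void main(String[] args) {
        // same Runner call as in the @Test method above
        Results results = Runner.path("classpath:.").tags("~@ignore").parallel(5);
        // exit non-zero if any scenario failed, so the build still detects failures
        System.exit(results.getFailCount() == 0 ? 0 : 1);
    }
}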
The overall problem I'm trying to solve is a way to count the number of reads present in each file at every step of a QC pipeline I'm building. I have a shell script I've used in the past which takes in a directory and outputs the number of reads per file. Since I'm looking to use a directory as input, I tried following the format laid out by Rasmus in this post:
https://bitbucket.org/snakemake/snakemake/issues/961/rule-with-folder-as-input-and-output
Here is some example input created earlier in the pipeline:
$ ls -1 cut_reads/
97_R1_cut.fastq.gz
97_R2_cut.fastq.gz
98_R1_cut.fastq.gz
98_R2_cut.fastq.gz
99_R1_cut.fastq.gz
99_R2_cut.fastq.gz
And a simplified Snakefile to first aggregate all reads by creating symlinks in a new directory, and then use that directory as input for the read counting shell script:
import os
configfile: "config.yaml"
rule all:
input:
"read_counts/read_counts.txt"
rule agg_count:
input:
cut_reads = expand("cut_reads/{sample}_{rdir}_cut.fastq.gz", rdir=["R1", "R2"], sample=config["forward_reads"])
output:
cut_dir = directory("read_counts/cut_reads")
run:
os.makedir(output.cut_dir)
for read in input.cut_reads:
abspath = os.path.abspath(read)
shell("ln -s {abspath} {output.cut_dir}")
rule count_reads:
input:
cut_reads = "read_counts/cut_reads"
output:
"read_counts/read_counts.txt"
shell:
'''
readcounts.sh {input.cut_reads} >> {output}
'''
Everything's fine in the dry-run, but when I try to actually execute it, I get a fairly cryptic error message:
Building DAG of jobs...
Using shell: /bin/bash
Provided cores: 1
Rules claiming more threads will be scaled down.
Job counts:
count jobs
1 agg_count
1 all
1 count_reads
3
[Tue Jun 18 11:31:22 2019]
rule agg_count:
input: cut_reads/99_R1_cut.fastq.gz, cut_reads/98_R1_cut.fastq.gz, cut_reads/97_R1_cut.fastq.gz, cut_reads/99_R2_cut.fastq.gz, cut_reads/98_R2_cut.fastq.gz, cut_reads/97_R2_cut.fastq.gz
output: read_counts/cut_reads
jobid: 2
Job counts:
count jobs
1 agg_count
1
[Tue Jun 18 11:31:22 2019]
Error in rule agg_count:
jobid: 0
output: read_counts/cut_reads
Exiting because a job execution failed. Look above for error message
Shutting down, this might take some time.
Exiting because a job execution failed. Look above for error message
Complete log: /home/douglas/snakemake/scrap_directory/.snakemake/log/2019-06-18T113122.202962.snakemake.log
read_counts/ was created, but there's no cut_reads/ directory inside. No other error messages are present in the complete log. Anyone know what's going wrong or how to receive a more descriptive error message?
I'm also (obviously) fairly new to snakemake, so there might be a better way to go about this whole process. Any help is much appreciated!
... And it was a typo. Typical. os.makedir(output.cut_dir) should be os.makedirs(output.cut_dir). I'm still really curious why Snakemake isn't displaying the AttributeError Python throws when you try to run this:
AttributeError: module 'os' has no attribute 'makedir'
Is there somewhere this is stored or can be accessed to prevent future headaches?
Are you sure the error message is due to the typo in os.makedir? In this test script os.makedir does throw AttributeError ...:
rule all:
input:
'tmp.done',
rule one:
output:
x= 'tmp.done',
xdir= directory('tmp'),
run:
os.makedir(output.xdir)
When executed:
Building DAG of jobs...
Using shell: /bin/bash
Provided cores: 1
Rules claiming more threads will be scaled down.
Job counts:
count jobs
1 all
1 one
2
[Wed Jun 19 09:05:57 2019]
rule one:
output: tmp.done, tmp
jobid: 1
Job counts:
count jobs
1 one
1
[Wed Jun 19 09:05:57 2019]
Error in rule one:
jobid: 0
output: tmp.done, tmp
RuleException:
AttributeError in line 10 of /home/dario/Tritume/Snakefile:
module 'os' has no attribute 'makedir'
File "/home/dario/Tritume/Snakefile", line 10, in __rule_one
File "/home/dario/miniconda3/envs/tritume/lib/python3.6/concurrent/futures/thread.py", line 56, in run
Exiting because a job execution failed. Look above for error message
Shutting down, this might take some time.
Exiting because a job execution failed. Look above for error message
Complete log: /home/dario/Tritume/.snakemake/log/2019-06-19T090557.113876.snakemake.log
Use an f-string to resolve local variables like {abspath}:
for read in input.cut_reads:
abspath = os.path.abspath(read)
shell(f"ln -s {abspath} {output.cut_dir}")
Wrap the wildcards that Snakemake resolves automatically in double braces inside f-strings.
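Putting both fixes together, the run block of agg_count would look roughly like this (a sketch using the same names as in the question):
run:
    os.makedirs(output.cut_dir)  # makedirs, not makedir
    for read in input.cut_reads:
        abspath = os.path.abspath(read)
        shell(f"ln -s {abspath} {output.cut_dir}")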
There is a situation where my database (with memory-optimized tables) has gone into the 'Recovery Pending' state. I tried to put it into
emergency mode --> single-user mode --> DBCC CHECKDB(<DBName>) --> set it online --> multi-user mode.
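The sequence of statements was roughly the following (DBName is a placeholder for the actual database name):
ALTER DATABASE [DBName] SET EMERGENCY;
ALTER DATABASE [DBName] SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
DBCC CHECKDB ([DBName]);
ALTER DATABASE [DBName] SET ONLINE;
ALTER DATABASE [DBName] SET MULTI_USER;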
But I am facing the below error message when setting it ONLINE.
Msg 5181, Level 16, State 5, Line 9 Could not restart database
"DBName". Reverting to the previous status. Msg 5069, Level 16, State
1, Line 9 ALTER DATABASE statement failed. Msg 41316, Level 23, State
3, Line 3395 Restore operation failed for database 'DBName' with
internal error code '0x0000000a'
I checked the SQL error log file and there is the below message.
[ERROR] Database ID: [6] ''. Failed to load XTP checkpoint. Error
code: 0x88000001.
(d:\b\s2\sources\sql\ntdbms\hekaton\sqlhost\sqlmin\hkhostdb.cpp : 5288
- 'HkHostRecoverDatabaseHelper::ReportAndRaiseFailure')
And rebuilding the log file for a memory-optimized database is also not supported. Is anyone familiar with this error?