I've been following this particular answer and created the corresponding symbolic links under /usr/local/cuda/lib64 and /usr/lib/x86_64-linux-gnu for the missing files, but I keep getting the warning
[.. W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcupti.so.11.2';
GPU training is working, but I am not able to use the TensorBoard profiler without this library. On the TensorBoard Profile tab I am greeted by the following error message:
Failed to load libcupti (is it installed and accessible?)
Full log (relevant part):
2022-05-01 11:39:36.945508: I tensorflow/core/profiler/lib/profiler_session.cc:110] Profiler session initializing.
2022-05-01 11:39:36.945528: I tensorflow/core/profiler/lib/profiler_session.cc:125] Profiler session started.
2022-05-01 11:39:36.945557: I tensorflow/core/profiler/internal/gpu/cupti_tracer.cc:1630] Profiler found 1 GPUs
2022-05-01 11:39:36.945763: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcupti.so.11.2'; dlerror: libcupti.so.11.2: cannot open shared object file: No such file or directory
2022-05-01 11:39:36.945828: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcupti.so'; dlerror: libcupti.so: cannot open shared object file: No such file or directory
2022-05-01 11:39:36.945835: E tensorflow/core/profiler/internal/gpu/cupti_error_manager.cc:135] cuptiGetTimestamp: error 999:
2022-05-01 11:39:36.945844: E tensorflow/core/profiler/internal/gpu/cupti_error_manager.cc:184] cuptiSubscribe: ignored due to a previous error.
2022-05-01 11:39:36.945848: E tensorflow/core/profiler/internal/gpu/cupti_error_manager.cc:457] cuptiGetResultString: ignored due to a previous error.
2022-05-01 11:39:36.945853: E tensorflow/core/profiler/internal/gpu/cupti_tracer.cc:1682] function cupti_interface_->Subscribe( &subscriber_, (CUpti_CallbackFunc)ApiCallback, this)failed with error
2022-05-01 11:39:36.945869: I tensorflow/core/profiler/lib/profiler_session.cc:143] Profiler session tear down.
2022-05-01 11:39:36.945881: E tensorflow/core/profiler/internal/gpu/cupti_error_manager.cc:140] cuptiFinalize: ignored due to a previous error.
2022-05-01 11:39:36.945884: E tensorflow/core/profiler/internal/gpu/cupti_error_manager.cc:457] cuptiGetResultString: ignored due to a previous error.
2022-05-01 11:39:36.945888: E tensorflow/core/profiler/internal/gpu/cupti_tracer.cc:1773] function cupti_interface_->Finalize()failed with error
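For reference, here is roughly what I set up following that answer (a sketch; the exact source paths are my assumption for a CUDA 11.2 toolkit installed under /usr/local/cuda-11.2):

# CUPTI usually lives under the toolkit's extras directory
find /usr/local/cuda-11.2 -name 'libcupti*'
# e.g. /usr/local/cuda-11.2/extras/CUPTI/lib64/libcupti.so.11.2

# Symlinks into the two directories mentioned above
sudo ln -s /usr/local/cuda-11.2/extras/CUPTI/lib64/libcupti.so.11.2 /usr/local/cuda/lib64/libcupti.so.11.2
sudo ln -s /usr/local/cuda-11.2/extras/CUPTI/lib64/libcupti.so.11.2 /usr/lib/x86_64-linux-gnu/libcupti.so.11.2

# CUPTI also has to be on the loader path of the process that runs the training
export LD_LIBRARY_PATH=/usr/local/cuda-11.2/extras/CUPTI/lib64:$LD_LIBRARY_PATH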
Related
I am running my Hive query on an EMR cluster, which is a 25-node cluster using r4.4xlarge instances.
When I run my query, I get the error below.
Job Commit failed with exception 'org.apache.hadoop.hive.ql.metadata.HiveException(java.io.IOException: com.amazon.ws.emr.hadoop.fs.shaded.com.amazonaws.services.s3.model.AmazonS3Exception: Not Found (Service: Amazon S3; Status Code: 404; Error Code: 404 Not Found; Request ID: FEAF40B78D086BEE; S3 Extended Request ID: yteHc4bRl1MrmVhqmnzm06rdzQNN8VcRwd4zqOa+rUY8m2HC2QTt9GoGR/Qu1wuJPILx4mchHRU=), S3 Extended Request ID: yteHc4bRl1MrmVhqmnzm06rdzQNN8VcRwd4zqOa+rUY8m2HC2QTt9GoGR/Qu1wuJPILx4mchHRU=)'
FAILED: Execution Error, return code 3 from org.apache.hadoop.hive.ql.exec.tez.TezTask
/mnt/var/lib/hadoop/steps/s-10YQZ5Z5PRUVJ/./hive-script:617:in `<main>': Error executing cmd: /usr/share/aws/emr/scripts/hive-script "--base-path" "s3://us-east-1.elasticmapreduce/libs/hive/" "--hive-versions" "latest" "--run-hive-script" "--args" "-f" "s3://205067-pcfp-app-stepfun-s3appbucket-qa/2019-02-22_App/d77a6a82-26f4-4f06-a1ea-e83677256a55/01/DeltaOutPut/processing/Scripts/script.sql" (RuntimeError)
Command exiting with ret '1'
I have tried setting all kinds of Hive parameter combinations, like the ones below:
emrfs-site fs.s3.consistent.retryPolicyType exponential
emrfs-site fs.s3.consistent.metadata.tableName EmrFSMetadataAlt
emrfs-site fs.s3.consistent.metadata.write.capacity 300
emrfs-site fs.s3.consistent.metadata.read.capacity 600
emrfs-site fs.s3.consistent true
hive-site hive.exec.stagingdir .hive-staging
hive-site hive.tez.java.opts -Xmx47364m
hive-site hive.stats.fetch.column.stats true
hive-site hive.stats.fetch.partition.stats true
hive-site hive.vectorized.execution.enabled false
hive-site hive.vectorized.execution.reduce.enabled false
hive-site tez.am.resource.memory.mb 15000
hive-site hive.auto.convert.join false
hive-site hive.compute.query.using.stats true
hive-site hive.cbo.enable true
hive-site tez.task.resource.memory.mb 16000
But it failed every time.
I tried increasing the number of nodes and using bigger instances in the EMR cluster, but the result is still the same.
I also tried with and without Tez, but it still did not work for me.
Here is my sample query. I am only copying part of it:
insert into filediffPcfp.TableDelta
Select rgt.FILLER1,rgt.DUNSNUMBER,rgt.BUSINESSNAME,rgt.TRADESTYLENAME,rgt.REGISTEREDADDRESSINDICATOR
Please help me identify the issue.
Adding the full YARN logs:
2019-02-26 06:28:54,318 [INFO] [TezChild] |exec.FileSinkOperator|: Final Path: FS s3://205067-pcfp-app-stepfun-s3appbucket-qa/2019-02-26_App/d996dfaa-1a62-4062-9350-d0c2bd62e867/01/DeltaOutPut/processing/Delta/.hive-staging_hive_2019-02-26_06-15-00_804_541842212852799084-1/_tmp.-ext-10000/000000_1
2019-02-26 06:28:54,319 [INFO] [TezChild] |exec.FileSinkOperator|: Writing to temp file: FS s3://205067-pcfp-app-stepfun-s3appbucket-qa/2019-02-26_App/d996dfaa-1a62-4062-9350-d0c2bd62e867/01/DeltaOutPut/processing/Delta/.hive-staging_hive_2019-02-26_06-15-00_804_541842212852799084-1/_task_tmp.-ext-10000/_tmp.000000_1
2019-02-26 06:28:54,319 [INFO] [TezChild] |exec.FileSinkOperator|: New Final Path: FS s3://205067-pcfp-app-stepfun-s3appbucket-qa/2019-02-26_App/d996dfaa-1a62-4062-9350-d0c2bd62e867/01/DeltaOutPut/processing/Delta/.hive-staging_hive_2019-02-26_06-15-00_804_541842212852799084-1/_tmp.-ext-10000/000000_1
2019-02-26 06:28:54,681 [INFO] [TezChild] |exec.FileSinkOperator|: FS[11]: records written - 1
2019-02-26 06:28:54,877 [INFO] [TezChild] |exec.MapOperator|: MAP[0]: records read - 1000
2019-02-26 06:28:56,632 [INFO] [TezChild] |exec.MapOperator|: MAP[0]: records read - 10000
2019-02-26 06:29:13,301 [INFO] [TezChild] |exec.MapOperator|: MAP[0]: records read - 100000
2019-02-26 06:31:59,207 [INFO] [TezChild] |exec.MapOperator|: MAP[0]: records read - 1000000
2019-02-26 06:34:42,686 [INFO] [TaskHeartbeatThread] |task.TaskReporter|: Received should die response from AM
2019-02-26 06:34:42,686 [INFO] [TaskHeartbeatThread] |task.TaskReporter|: Asked to die via task heartbeat
2019-02-26 06:34:42,687 [INFO] [TaskHeartbeatThread] |task.TezTaskRunner2|: Attempting to abort attempt_1551161362408_0001_7_01_000000_1 due to an invocation of shutdownRequested
2019-02-26 06:34:42,687 [INFO] [TaskHeartbeatThread] |tez.TezProcessor|: Received abort
2019-02-26 06:34:42,687 [INFO] [TaskHeartbeatThread] |tez.TezProcessor|: Forwarding abort to RecordProcessor
2019-02-26 06:34:42,687 [INFO] [TaskHeartbeatThread] |tez.MapRecordProcessor|: Forwarding abort to mapOp: {} MAP
2019-02-26 06:34:42,687 [INFO] [TaskHeartbeatThread] |exec.MapOperator|: Received abort in operator: MAP
2019-02-26 06:34:42,705 [INFO] [TezChild] |s3.S3FSInputStream|: Encountered exception while reading '2019-02-26_App/d996dfaa-1a62-4062-9350-d0c2bd62e867/01/IncrFile/WB.ACTIVE.OCT17_01_OF_10.gz', will retry by attempting to reopen stream.
com.amazon.ws.emr.hadoop.fs.shaded.com.amazonaws.AbortedException:
at com.amazon.ws.emr.hadoop.fs.shaded.com.amazonaws.internal.SdkFilterInputStream.abortIfNeeded(SdkFilterInputStream.java:53)
at com.amazon.ws.emr.hadoop.fs.shaded.com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:81)
at com.amazon.ws.emr.hadoop.fs.s3n.InputStreamWithInfo.read(InputStreamWithInfo.java:173)
at com.amazon.ws.emr.hadoop.fs.s3.S3FSInputStream.read(S3FSInputStream.java:136)
at java.io.BufferedInputStream.read1(BufferedInputStream.java:284)
at java.io.BufferedInputStream.read(BufferedInputStream.java:345)
at java.io.DataInputStream.read(DataInputStream.java:149)
at org.apache.hadoop.io.compress.DecompressorStream.getCompressedData(DecompressorStream.java:179)
at org.apache.hadoop.io.compress.DecompressorStream.decompress(DecompressorStream.java:163)
at org.apache.hadoop.io.compress.DecompressorStream.read(DecompressorStream.java:105)
at java.io.InputStream.read(InputStream.java:101)
at org.apache.hadoop.util.LineReader.fillBuffer(LineReader.java:182)
at org.apache.hadoop.util.LineReader.readDefaultLine(LineReader.java:218)
at org.apache.hadoop.util.LineReader.readLine(LineReader.java:176)
at org.apache.hadoop.mapred.LineRecordReader.next(LineRecordReader.java:255)
at org.apache.hadoop.mapred.LineRecordReader.next(LineRecordReader.java:48)
at org.apache.hadoop.hive.ql.io.HiveContextAwareRecordReader.doNext(HiveContextAwareRecordReader.java:360)
at org.apache.hadoop.hive.ql.io.HiveRecordReader.doNext(HiveRecordReader.java:79)
at org.apache.hadoop.hive.ql.io.HiveRecordReader.doNext(HiveRecordReader.java:33)
at org.apache.hadoop.hive.ql.io.HiveContextAwareRecordReader.next(HiveContextAwareRecordReader.java:116)
at org.apache.hadoop.mapred.split.TezGroupedSplitsInputFormat$TezGroupedSplitsRecordReader.next(TezGroupedSplitsInputFormat.java:151)
at org.apache.tez.mapreduce.lib.MRReaderMapred.next(MRReaderMapred.java:116)
at org.apache.hadoop.hive.ql.exec.tez.MapRecordSource.pushRecord(MapRecordSource.java:62)
Switch from Tez mode to MR; it should start working. Also remove all the Tez-related properties.
set hive.execution.engine=mr;
Let me answer my own question.
The first and very important thing we have noticed while running Hive jobs on EMR is that the step error is misleading; "vertex failed" will not point you in the correct direction.
So it is better to check the Hive logs.
Now if the instance is terminated, we cannot log into the master instance to see the logs; in that case we have to look at the nodes' application logs.
Here is how we can find those node logs.
Get the master instance id (something like i-04d04d9a8f7d28fd1) and use it to search among the nodes.
Then open the path below:
/applications/hive/user/hive/hive.log.gz
Here you can find the expected error.
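A quick way to surface it, for example (a sketch using the path above):

zgrep -iE 'error|exception' /applications/hive/user/hive/hive.log.gz | tail -n 50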
We also have to look at the container logs for the failed nodes; the failed node details can be found on the master instance node:
hadooplogs/j-25RSD7FFOL5JB/node/i-03f8a646a7ae97aae/daemons/
These daemon node logs can be found only while the cluster is running; after the cluster is terminated, EMR does not push these logs to the S3 log URI.
When I looked at them, I found the real reason why it was failing.
For me, this was the reason for the failure:
On checking the master instance's instance-controller logs, I saw that multiple core instances had gone into an unhealthy state:
2019-02-27 07:50:03,905 INFO Poller: InstanceJointStatusMap contains 21 entries (R:21):
i-0131b7a6abd0fb8e7 1541s R 1500s ig-28 ip-10-97-51-145.tr-fr-nonprod.aws-int.thomsonreuters.com I: 18s Y:U 81s c: 0 am: 0 H:R 0.6%Yarn unhealthy Reason : 1/1 local-dirs are bad: /mnt/yarn; 1/1 log-dirs are bad: /var/log/hadoop-yarn/containers
i-01672279d170dafd3 1539s R 1500s ig-28 ip-10-97-54-69.tr-fr-nonprod.aws-int.thomsonreuters.com I: 16s Y:R 79s c: 0 am:241664 H:R 0.7%
i-0227ac0f0932bd0b3 1539s R 1500s ig-28 ip-10-97-51-197.tr-fr-nonprod.aws-int.thomsonreuters.com I: 16s Y:R 79s c: 0 am:241664 H:R 4.1%
i-02355f335c190be40 1544s R 1500s ig-28 ip-10-97-52-150.tr-fr-nonprod.aws-int.thomsonreuters.com I: 22s Y:R 84s c: 0 am:241664 H:R 0.2%
i-024ed22b6affdd5ec 1540s R 1500s ig-28 ip-10-97-55-123.tr-fr-nonprod.aws-int.thomsonreuters.com I: 16s Y:U 79s c: 0 am: 0 H:R 0.6%Yarn unhealthy Reason : 1/1 local-dirs are bad: /mnt/yarn; 1/1 log-dirs are bad: /var/log/hadoop-yarn/containers
Also, after some time YARN blacklisted the core instances:
2019-02-27 07:46:39,676 INFO Poller: Determining health status for App Monitor: aws157.instancecontroller.apphealth.monitor.YarnMonitor
2019-02-27 07:46:39,688 INFO Poller: SlaveRecord i-0ac26bd7886fec338 changed state from RUNNING to BLACKLISTED
2019-02-27 07:47:13,695 INFO Poller: SlaveRecord i-0131b7a6abd0fb8e7 changed state from RUNNING to BLACKLISTED
2019-02-27 07:47:13,695 INFO Poller: Update SlaveRecordDbRow for i-0131b7a6abd0fb8e7 ip-10-97-51-145.tr-fr-nonprod.aws-int.thomsonreuters.com
2019-02-27 07:47:13,696 INFO Poller: SlaveRecord i-024ed22b6affdd5ec changed state from RUNNING to BLACKLISTED
2019-02-27 07:47:13,696 INFO Poller: Update SlaveRecordDbRow for i-024ed22b6affdd5ec ip-10-97-55-123.tr-fr-nonprod.aws-int.thomsonreuters.com
On checking the instance nodes' instance-controller logs, I can see that /mnt got full due to job caching and usage went beyond the threshold (90% by default).
Because of this, YARN marked the local dirs, and hence the nodes, as unhealthy:
2019-02-27 07:40:52,231 INFO dsm-1: /mnt total 27633 MB free 2068 MB used 25565 MB
2019-02-27 07:40:52,231 INFO dsm-1: / total 100663 MB free 97932 MB used 2731 MB
2019-02-27 07:40:52,231 INFO dsm-1: cycle 17 /mnt/var/log freeSpaceMb: 2068/27633 MB freeRatio:0.07
2019-02-27 07:40:52,248 INFO dsm-1: /mnt/var/log stats :
-> In my dataset the source table uses .gz compression. Since .gz-compressed files are non-splittable, each file gets exactly one map task assigned to it, and because the map task decompresses the file in /mnt, this can also contribute to the issue.
-> Processing a large amount of data in EMR needs some Hive properties to be optimized. Below are a few optimization properties that can be set on the cluster to make the query run better.
Most important of all:
Increase the EBS volume size for the core instances.
It is important to increase the EBS volume for each core node, not only for the master, because the EBS volume is where /mnt gets mounted, not the root volume.
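For reference, this is roughly how larger EBS volumes can be requested for the core group when creating the cluster with the AWS CLI (a sketch; the name, release label and sizes are illustrative, not the values we used):

aws emr create-cluster \
  --name "hive-delta" \
  --release-label emr-5.20.0 \
  --use-default-roles \
  --instance-groups '[
    {"InstanceGroupType":"MASTER","InstanceType":"r4.4xlarge","InstanceCount":1},
    {"InstanceGroupType":"CORE","InstanceType":"r4.4xlarge","InstanceCount":25,
     "EbsConfiguration":{"EbsBlockDeviceConfigs":[
       {"VolumeSpecification":{"VolumeType":"gp2","SizeInGB":256},
        "VolumesPerInstance":1}]}}]'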
This alone solved my problem, but the configuration below also helped me optimize the Hive jobs:
hive-site.xml
-------------
"hive.exec.compress.intermediate" : "true",
"hive.intermediate.compression.codec" : "org.apache.hadoop.io.compress.SnappyCodec",
"hive.intermediate.compression.type" : "BLOCK"
yarn-site.xml
-------------
"max-disk-utilization-per-disk-percentage" : "99"
And this has resolved my issue permanently.
I hope someone will benefit from my answer.
I installed GCG and created the soft link to hMETIS in the scipoptsuite/gcg directory:
ubuntu18:~/Documents/Software/scipoptsuite-6.0.0/gcg$ ln -s /home/yang/Documents/Software/scipoptsuite-6.0.0/hmetis-2.0pre1/Linux-x86_64/hmetis2.0pre1 hmetis
When I run the example test on /check/instances/cs/TEST0055.lp, there are some differences compared to the logs at http://gcg.or.rwth-aachen.de/doc/EXAMPLE.html, which uses the same TEST0055.lp file:
Presolving Time: 0.01
start creating seeedpool for current problem
created seeedpool for current problem, n detectors: 25
Consclassifier "nonzeros" yields a classification with 2 different constraint classes
Consclassifier "constypes" yields a classification with 2 different constraint classes
Consclassifier "constypes according to miplib" yields a classification with 2 different constraint classes
Consclassifier "constypes according to miplib" is not considered since it offers the same structure as "constypes" consclassifier
Varclassifier "vartypes" yields a classification with 2 different variable classes
Varclassifier "varobjvals" yields a classification with 2 different variable classes
Varclassifier "varobjvalsigns" yields a classification with 2 different variable classes
Varclassifier "varobjvalsigns" is not considered since it offers the same structure as "varobjvals"
Begin of detection round 0 of 1 total rounds
Start to propagate seeed with id 1 (0 of 1 in round 0)
in dec_consclass: there are 2 different constraint classes
the current constraint classifier "nonzeros" consists of 2 different classes
the current constraint classifier "constypes" consists of 2 different classes
dec_consclass found 6 new seeeds
dec_densemasterconss found 1 new seeed
sh: 1: zsh: not found
[src/dec_hrgpartition.cpp:314] ERROR: Calling hmetis unsuccessful! See the above error message for more details.
[src/dec_hrgpartition.cpp:315] ERROR: Call was zsh -c "hmetis gcg-r-1.metis.1l488e 20 -seed 1 -ptype rb -ufactor 5.000000 > /dev/null"
sh: 1: zsh: not found
[src/dec_hrgpartition.cpp:314] ERROR: Calling hmetis unsuccessful! See the above error message for more details.
[src/dec_hrgpartition.cpp:315] ERROR: Call was zsh -c "hmetis gcg-r-1.metis.1l488e 10 -seed 1 -ptype rb -ufactor 5.000000 > /dev/null"
sh: 1: zsh: not found
[src/dec_hrgpartition.cpp:314] ERROR: Calling hmetis unsuccessful! See the above error message for more details.
[src/dec_hrgpartition.cpp:315] ERROR: Call was zsh -c "hmetis gcg-r-1.metis.1l488e 29 -seed 1 -ptype rb -ufactor 5.000000 > /dev/null"
Detecting Arrowhead structure: 20 10 29 done, 0 seeeds found.
Start finishing of partial decomposition 1.
The objective value is the same as in the example on GCG's website, but the solutions are different.
Why do these errors appear? Is there anything wrong with the GCG or SCIP software? Another odd point: the number of Solving Nodes is only 1 in my test, whereas it is 82 in the http://gcg.or.rwth-aachen.de/doc/EXAMPLE.html example. I also ran the instance 'bpp/N1C1W4_M.BPP.lp', and the above errors occur there as well.
Begin of detection round 0 of 1 total rounds
Start to propagate seeed with id 39 (0 of 1 in round 0)
in dec_consclass: there are 1 different constraint classes
the current constraint classifier "nonzeros" consists of 2 different classes
dec_consclass found 3 new seeeds
dec_densemasterconss found 1 new seeed
sh: 1: zsh: not found
[src/dec_hrgpartition.cpp:314] ERROR: Calling hmetis unsuccessful! See the above error message for more details.
[src/dec_hrgpartition.cpp:315] ERROR: Call was zsh -c "hmetis gcg-r-39.metis.wDKr6U 50 -seed 1 -ptype rb -ufactor 5.000000 > /dev/null"
sh: 1: zsh: not found
[src/dec_hrgpartition.cpp:314] ERROR: Calling hmetis unsuccessful! See the above error message for more details.
[src/dec_hrgpartition.cpp:315] ERROR: Call was zsh -c "hmetis gcg-r-39.metis.wDKr6U 51 -seed 1 -ptype rb -ufactor 5.000000 > /dev/null"
Detecting Arrowhead structure: 50 51 done, 0 seeeds found.
And it is strange that the number of Solving Nodes is still 1:
SCIP Status : problem is solved [optimal solution found]
Solving Time (sec) : 0.72
Solving Nodes : 1
Primal Bound : +4.10000000000000e+01 (3 solutions)
Dual Bound : +4.10000000000000e+01
Gap : 0.00 %
Pig exits with exit code 7 after printing these three lines:
2014-07-16 21:57:37,271 [main] INFO org.apache.pig.Main - Apache Pig version 0.11.0-cdh4.6.0 (rexported) compiled Feb 26 2014, 03:01:22
2014-07-16 21:57:37,272 [main] INFO org.apache.pig.Main - Logging error messages to: ..../pig_1405562257268.log
2014-07-16 21:57:37,627 [main] INFO org.apache.pig.impl.util.Utils - Default bootup file /home/sam/.pigbootup not found
What does this mean?
The INFO messages are normal.
The only unusual bit is the exit code (7, see above).
The pig_*.log file does not exist.
Is this documented somewhere?
EDIT: the problem was eliminated when I removed the semicolon from the end of the %declare line.
Go figure...
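For illustration, a minimal sketch of the kind of line that triggered it (the parameter name and value are hypothetical):

-- exits with code 7 (ParseException during variable substitution):
%declare INPUT_DATE '20140716';

-- works once the trailing semicolon is removed:
%declare INPUT_DATE '20140716'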
You may take a look at the return codes in the source code.
The book Programming Pig also contains a list of their meanings in chapter two.
I copy them here for reference:
0 Success
1 Retriable failure
2 Failure
3 Partial failure - Used with multiquery; see “Nonlinear Data Flows”
4 Illegal arguments passed to Pig
5 IOException thrown - Would usually be thrown by a UDF
6 PigException thrown - Usually means a Python UDF raised an exception
7 ParseException thrown (can happen after parsing if variable substitution is being done)
8 Throwable thrown (an unexpected exception)
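Since the code is returned as the process exit status, it can be checked directly from a shell (the script name is a placeholder):

pig myscript.pig
echo $?    # 7 would indicate a ParseException, e.g. from variable substitution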
We are trying to test the co-simulation options of Dymola and created an FMU file. We installed/built FMILibrary-2.0b2 and FMUChecker-2.0b1 from www.fmi-standard.org.
I encountered an issue while trying to run the FMUChecker (fmuCheck.linux32) on an FMU file my colleague created with Dymola. When I create an FMU file from the same Dymola model with my own Dymola license, this issue is not reproducible: fmuCheck.linux32 runs fine without any error messages.
My colleague can run both files without problems!
As it is our goal to use this option for co-simulation, I tried to run the FMU file on a PC without Dymola, and again I got the same error with both my copy of the FMU and the one my colleague created.
Here's the error message:
fmuCheck.linux32 PemFcSysLib_Projects_Modl_SimCoolCirc.fmu
[INFO][FMUCHK] Will process FMU PemFcSysLib_Projects_Modl_SimCoolCirc.fmu
[INFO][FMILIB] XML specifies FMI standard version 1.0
[INFO][FMI1XML] Processing implementation element (co-simulation FMU detected)
[INFO][FMUCHK] Model name: PemFcSysLib.Projects.Modl.SimCoolCirc
[INFO][FMUCHK] Model identifier: PemFcSysLib_Projects_Modl_SimCoolCirc
[INFO][FMUCHK] Model GUID: {6eba096a-a778-4cf1-a7c2-3bd6121a1a52}
[INFO][FMUCHK] Model version:
[INFO][FMUCHK] FMU kind: CoSimulation_StandAlone
[INFO][FMUCHK] The FMU contains:
18 constants
1762 parameters
26 discrete variables
281 continuous variables
0 inputs
0 outputs
2087 internal variables
0 variables with causality 'none'
2053 real variables
0 integer variables
0 enumeration variables
34 boolean variables
0 string variables
[INFO][FMUCHK] Printing output file header
time
[INFO][FMILIB] Loading 'linux32' binary with 'standard32' platform types
[INFO][FMUCHK] Version returned from FMU: 1.0
[FMU][FMU status:OK]
...
[FMU][FMU status:OK]
[FMU][FMU status:Error] fmiInitialize: dsblock_ failed, QiErr = 1
[FMU][FMU status:Error] Unless otherwise indicated by error messages, possible errors are (non-exhaustive):
1. The license file was not found. Use the environment variable "DYMOLA_RUNTIME_LICENSE" t
[FATAL][FMUCHK] Failed to initialize FMU for simulation (FMU status: Error)
[FATAL][FMUCHK] Simulation loop terminated at time 0 since FMU returned status: Error
FMU check summary:
FMU reported:
2 warning(s) and error(s)
Checker reported:
0 Warning(s)
0 Error(s)
Fatal error occured during processing
I think an FMU file shouldn't need a Dymola license to be simulated, so I can't see why this simulation failed.
What could be the reason for this strange behaviour?
This is partially the same error message as in this issue:
Initialization of a Dymola FMU in Simulink
Any suggestions are much appreciated. Thank you.
It seems that Dymola has not set the environment variable pointing to the license file on Ubuntu. We have done this manually by adding the following lines to .bashrc:
# Dymola runtime license, path
DYMOLA_RUNTIME_LICENSE=$HOME/.dynasim/dymola.lic
export DYMOLA_RUNTIME_LICENSE
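After sourcing .bashrc (or opening a new shell), the setting can be verified and the check rerun, using the same FMU as above:

source ~/.bashrc
echo $DYMOLA_RUNTIME_LICENSE    # should print /home/<user>/.dynasim/dymola.lic
fmuCheck.linux32 PemFcSysLib_Projects_Modl_SimCoolCirc.fmu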
Now we can simulate each other's FMU files!
Whether an exported FMU requires a license depends on whether the copy of Dymola that exported the FMU had the "Binary Export" feature. The bottom line is that if you want unencumbered FMUs from Dymola, you have to pay for an extra licensed feature.
I have an MSBuild script that I am executing through TeamCity.
One of the tasks it runs is from Xheo DeployLX CodeVeil, which obfuscates some DLLs. The task I am using is called VeilProject. I have run the CodeVeil project through the interface manually and it works correctly, so I think I can safely assume that the actual obfuscation process is OK.
This task used to take around 40 minutes, and the rest of the MSBuild file executed perfectly and finished without errors.
For some reason this task is now taking 1 hour 20 minutes or so to execute. Once the VeilProject task is finished, the output from the task says it completed successfully, yet the MSBuild script fails at this point. I have a task directly after the VeilProject task and its output never appears. Using the diagnostic output from MSBuild I can see the following:
My questions are:
Would it be possible that the MSBuild script has timed out? Once the task has completed, it is after a certain timeout period, so it just fails?
Why would the build fail with no errors and no warnings?
[05:39:06]: [Target "Obfuscate"] Finished.
[05:39:06]: [Target "Obfuscate"] Saving exception map
[05:49:21]: [Target "Obfuscate"] Ended at 11/05/2010 05:49:21, ~1 hour, 48 minutes, 6 seconds
[05:49:22]: [Target "Obfuscate"] Done.
[05:49:51]: MSBuild output:
Ended at 11/05/2010 05:49:21, ~1 hour, 48 minutes, 6 seconds (TaskId:8)
Done. (TaskId:8)
Done executing task "VeilProject" -- FAILED. (TaskId:8)
Done building target "Obfuscate" in project "AMK_Release.proj.teamcity.patch.tcprojx" -- FAILED.: (TargetId:12)
Done Building Project "C:\Builds\Scripts\AMK_Release.proj.teamcity.patch.tcprojx" (All target(s)) -- FAILED.
Project Performance Summary:
6535484 ms C:\Builds\Scripts\AMK_Release.proj.teamcity.patch.tcprojx 1 calls
6535484 ms All 1 calls
Target Performance Summary:
156 ms PreClean 1 calls
266 ms SetBuildVersionNumber 1 calls
2406 ms CopyFiles 1 calls
6532391 ms Obfuscate 1 calls
Task Performance Summary:
16 ms MakeDir 2 calls
31 ms TeamCitySetBuildNumber 1 calls
31 ms Message 1 calls
62 ms RemoveDir 2 calls
234 ms GetAssemblyIdentity 1 calls
2406 ms Copy 1 calls
6528047 ms VeilProject 1 calls
Build FAILED.
0 Warning(s)
0 Error(s)
Time Elapsed 01:48:57.46
[05:49:52]: Process exit code: 1
[05:49:55]: Build finished
If the .exe is not returning standard exit codes, you may want to ignore the exit code by using the Exec task with IgnoreExitCode="true". If that doesn't work, try the additional parameter IgnoreStandardErrorWarningFormat="true".
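A minimal sketch of what that could look like inside the Obfuscate target of the project file (the command line is a placeholder, not the actual CodeVeil invocation):

<Exec Command="veil.exe AMK_Release.veilproj"
      IgnoreExitCode="true"
      IgnoreStandardErrorWarningFormat="true" />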