I started a Ktor project in IntelliJ and it runs fine there. However, I am now trying to have Docker run the jar created in the build/libs directory, and I get this error:
WARNING: Your kernel does not support swap limit capabilities or the cgroup is not mounted. Memory limited without swap.
Error: A JNI error has occurred, please check your installation and try again
Exception in thread "main" java.lang.NoClassDefFoundError: io/ktor/application/Application
at java.lang.Class.getDeclaredMethods0(Native Method)
at java.lang.Class.privateGetDeclaredMethods(Class.java:2701)
at java.lang.Class.privateGetMethodRecursive(Class.java:3048)
at java.lang.Class.getMethod0(Class.java:3018)
at java.lang.Class.getMethod(Class.java:1784)
at sun.launcher.LauncherHelper.validateMainClass(LauncherHelper.java:544)
at sun.launcher.LauncherHelper.checkAndLoadMain(LauncherHelper.java:526)
Caused by: java.lang.ClassNotFoundException: io.ktor.application.Application
at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
... 7 more
This is what my ktor.Dockerfile looks like:
FROM openjdk:8-jre-alpine
ENV APPLICATION_USER ktor
RUN adduser -D -g '' $APPLICATION_USER
RUN mkdir /app
RUN chown -R $APPLICATION_USER /app
USER $APPLICATION_USER
COPY ./build/libs/website-0.0.1.jar /app/website-0.0.1.jar
WORKDIR /app
CMD ["java", "-server", "-XX:+UnlockExperimentalVMOptions", "-XX:+UseCGroupMemoryLimitForHeap", "-XX:InitialRAMFraction=2", "-XX:MinRAMFraction=2", "-XX:MaxRAMFraction=2", "-XX:+UseG1GC", "-XX:MaxGCPauseMillis=100", "-XX:+UseStringDeduplication", "-cp", "website-0.0.1.jar", "com.diracian.ApplicationKt"]
What I did first was start a Ktor project, and after I got that working, I started on my Dockerfile. I ended up breaking the build into stages. My Dockerfile now looks like this and works:
FROM gradle:jdk10 as builder
COPY --chown=gradle:gradle . /home/gradle/src
WORKDIR /home/gradle/src
RUN gradle build
FROM openjdk:10-jre-slim
EXPOSE 8080
COPY --from=builder /home/gradle/src/build/distributions/website-0.0.1.tar /app/
WORKDIR /app
RUN tar -xvf website-0.0.1.tar
WORKDIR /app/website-0.0.1
CMD bin/website
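For reference, a minimal way to build and run the resulting image (a sketch: it assumes the file is still named ktor.Dockerfile, the image name is illustrative, and 8080 comes from the EXPOSE line above):
# Build the multi-stage image and run it, publishing the exposed port.
docker build -t website -f ktor.Dockerfile .
docker run --rm -p 8080:8080 website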
I am trying to deploy a Keycloak container in an OpenShift environment. When the container starts, the keycloak-bootstrap.sh script should run automatically and set the context path to "/ddu-auth". I have been struggling with this issue for the last 4-5 days and have not found a solution; any help would be appreciated.
If I remove the bootstrap.sh file the container is stable; otherwise it restarts automatically every 2-3 minutes.
Dockerfile for image creation
FROM jboss/keycloak-openshift
ADD keycloak-bootstrap.sh /usr/bin/
ADD openshift-entrypoint.sh /usr/bin/
USER root
RUN chmod +x /usr/bin/openshift-entrypoint.sh && \
chmod +x /usr/bin/keycloak-bootstrap.sh && \
chmod +x /opt/jboss/ddu.sh
USER 1000
EXPOSE 8080
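The Dockerfile above adds an openshift-entrypoint.sh but does not show how it launches the bootstrap script. One common pattern (a sketch only; <original-entrypoint> is a placeholder for whatever start command the base image normally runs) is to start the bootstrap in the background and then exec the original entrypoint so the server stays PID 1:
#!/bin/bash
# openshift-entrypoint.sh (sketch): run the bootstrap alongside the server, not instead of it.
/usr/bin/keycloak-bootstrap.sh &
exec <original-entrypoint> "$@"   # placeholder: the base image's normal start command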
**#keycloak-bootstrap.sh**
/opt/jboss/keycloak/bin/add-user-keycloak.sh -u admin -p ert246yui
/opt/jboss/keycloak/bin/kcadm.sh config credentials --server http://localhost:8080/auth --realm master --user admin --password ert246yui
/opt/jboss/keycloak/bin/kcadm.sh update realms/master -s sslRequired=NONE
sleep 50
/opt/jboss/keycloak/bin/jboss-cli.sh --connect --command="/subsystem=keycloak-server/:write-attribute(name="web-context",value=ddu-auth)"
/opt/jboss/keycloak/bin/jboss-cli.sh --connect --command=:reload
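One fragile point in this script is the fixed sleep 50 before reconfiguring the server. A sketch of a readiness wait instead (assumptions: curl is available in the image, and Keycloak serves /auth on localhost:8080 inside the container, as the kcadm call above already implies):
# Poll until the server answers before running kcadm.sh / jboss-cli.sh.
until curl -sf http://localhost:8080/auth > /dev/null; do
  echo "waiting for Keycloak..."
  sleep 5
done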
**Error details**
13:09:54,018 INFO  [org.keycloak.services] (ServerService Thread Pool -- 73) KC-SERVICES0001: Loading config from standalone.xml or domain.xml
13:09:54,055 INFO  [org.jboss.as.server] (Thread-2) WFLYSRV0236: Suspending server with no timeout.
13:09:54,056 INFO  [org.jboss.as.ejb3] (Thread-2) WFLYEJB0493: EJB subsystem suspension complete
13:09:54,058 INFO  [org.jboss.as.server] (Thread-2) WFLYSRV0220: Server shutdown has been requested via an OS signal
13:09:54,062 ERROR [org.jboss.msc.service.fail] (ServerService Thread Pool -- 73) MSC000001: Failed to start service jboss.undertow.deployment.default-server.default-host./ddu-auth: org.jboss.msc.service.StartException in service jboss.undertow.deployment.default-server.default-host./ddu-auth: java.lang.RuntimeException: RESTEASY003325: Failed to construct public org.keycloak.services.resources.KeycloakApplication(javax.servlet.ServletContext,org.jboss.resteasy.core.Dispatcher)
at org.wildfly.extension.undertow.deployment.UndertowDeploymentService$1.run(UndertowDeploymentService.java:81)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at org.jboss.threads.ContextClassLoaderSavingRunnable.run(ContextClassLoaderSavingRunnable.java:35)
at org.jboss.threads.EnhancedQueueExecutor.safeRun(EnhancedQueueExecutor.java:1985)
at org.jboss.threads.EnhancedQueueExecutor$ThreadBody.doRunTask(EnhancedQueueExecutor.java:1487)
at org.jboss.threads.EnhancedQueueExecutor$ThreadBody.run(EnhancedQueueExecutor.java:1378)
at java.lang.Thread.run(Thread.java:748)
at org.jboss.threads.JBossThread.run(JBossThread.java:485)
Caused by: java.lang.RuntimeException: RESTEASY003325: Failed to construct public org.keycloak.services.resources.KeycloakApplication(javax.servlet.ServletContext,org.jboss.resteasy.core.Dispatcher)
at org.jboss.resteasy.core.ConstructorInjectorImpl.construct(ConstructorInjectorImpl.java:162)
at org.jboss.resteasy.spi.ResteasyProviderFactory.createProviderInstance(ResteasyProviderFactory.java:2676)
at org.jboss.resteasy.spi.ResteasyDeployment.createApplication(ResteasyDeployment.java:361)
at org.jboss.resteasy.spi.ResteasyDeployment.startInternal(ResteasyDeployment.java:274)
at org.jboss.resteasy.spi.ResteasyDeployment.start(ResteasyDeployment.java:86)
at org.jboss.resteasy.plugins.server.servlet.ServletContainerDispatcher.init(ServletContainerDispatcher.java:119)
at org.jboss.resteasy.plugins.server.servlet.HttpServletDispatcher.init(HttpServletDispatcher.java:36)
at io.undertow.servlet.core.LifecyleInterceptorInvocation.proceed(LifecyleInterceptorInvocation.java:117)
at org.wildfly.extension.undertow.security.RunAsLifecycleInterceptor.init(RunAsLifecycleInterceptor.java:78)
at io.undertow.servlet.core.LifecyleInterceptorInvocation.proceed(LifecyleInterceptorInvocation.java:103)
at io.undertow.servlet.core.ManagedServlet$DefaultInstanceStrategy.start(ManagedServlet.java:300)
at io.undertow.servlet.core.ManagedServlet.createServlet(ManagedServlet.java:140)
at io.undertow.servlet.core.DeploymentManagerImpl$2.call(DeploymentManagerImpl.java:584)
at io.undertow.servlet.core.DeploymentManagerImpl$2.call(DeploymentManagerImpl.java:555)
at io.undertow.servlet.core.ServletRequestContextThreadSetupAction$1.call(ServletRequestContextThreadSetupAction.java:42)
at io.undertow.servlet.core.ContextClassLoaderSetupAction$1.call(ContextClassLoaderSetupAction.java:43)
at org.wildfly.extension.undertow.security.SecurityContextThreadSetupAction.lambda$create$0(SecurityContextThreadSetupAction.java:105)
I'm trying to run a Flink job on YARN:
$ export HADOOP_CONF_DIR=/etc/hadoop/conf
$ ./${FLINK_HOME}/bin/yarn-session.sh ...
$ ./${FLINK_HOME}/bin/flink run \
/home/clsadmin/messagehub-to-s3-1.0-SNAPSHOT.jar \
--kafka-brokers ${KAFKA_BROKERS} \
...
--output-folder s3://${S3_BUCKET}/${S3_FOLDER}
The startup output shows the Hadoop classpath being added to Flink:
Using the result of 'hadoop classpath' to augment the Hadoop classpath: /usr/hdp/2.6.2.0-205/hadoop/conf:/usr/hdp/2.6.2.0-205/hadoop/lib/*:/usr/hdp/2.6.2.0-205/hadoop/.//*:/usr/hdp/2.6.2.0-205/hadoop-hdfs/./:/usr/hdp/2.6.2.0-205/hadoop-hdfs/lib/*:/usr/hdp/2.6.2.0-205/hadoop-hdfs/.//*:/usr/hdp/2.6.2.0-205/hadoop-yarn/lib/*:/usr/hdp/2.6.2.0-205/hadoop-yarn/.//*:/usr/hdp/2.6.2.0-205/hadoop-mapreduce/lib/*:/usr/hdp/2.6.2.0-205/hadoop-mapreduce/.//*::mysql-connector-java.jar:/home/common/lib/dataconnectorStocator/*:/usr/hdp/2.6.2.0-205/tez/*:/usr/hdp/2.6.2.0-205/tez/lib/*:/usr/hdp/2.6.2.0-205/tez/conf
However, the job fails to start:
01/29/2018 16:11:04 Job execution switched to status RUNNING.
01/29/2018 16:11:04 Source: Custom Source -> Map -> Sink: Unnamed(1/1) switched to SCHEDULED
01/29/2018 16:11:04 Source: Custom Source -> Map -> Sink: Unnamed(1/1) switched to DEPLOYING
01/29/2018 16:11:05 Source: Custom Source -> Map -> Sink: Unnamed(1/1) switched to RUNNING
01/29/2018 16:11:05 Source: Custom Source -> Map -> Sink: Unnamed(1/1) switched to FAILED
java.lang.NoClassDefFoundError: Could not initialize class org.apache.hadoop.fs.s3a.S3AFileSystem
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:348)
at org.apache.hadoop.conf.Configuration.getClassByNameOrNull(Configuration.java:2134)
at org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:2099)
at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2193)
at org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:2654)
at org.apache.flink.streaming.connectors.fs.bucketing.BucketingSink.createHadoopFileSystem(BucketingSink.java:1196)
at org.apache.flink.streaming.connectors.fs.bucketing.BucketingSink.initFileSystem(BucketingSink.java:411)
at org.apache.flink.streaming.connectors.fs.bucketing.BucketingSink.initializeState(BucketingSink.java:355)
at org.apache.flink.streaming.util.functions.StreamingFunctionUtils.tryRestoreFunction(StreamingFunctionUtils.java:178)
at org.apache.flink.streaming.util.functions.StreamingFunctionUtils.restoreFunctionState(StreamingFunctionUtils.java:160)
at org.apache.flink.streaming.api.operators.AbstractUdfStreamOperator.initializeState(AbstractUdfStreamOperator.java:96)
at org.apache.flink.streaming.api.operators.AbstractStreamOperator.initializeState(AbstractStreamOperator.java:259)
at org.apache.flink.streaming.runtime.tasks.StreamTask.initializeOperators(StreamTask.java:694)
at org.apache.flink.streaming.runtime.tasks.StreamTask.initializeState(StreamTask.java:682)
at org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:253)
at org.apache.flink.runtime.taskmanager.Task.run(Task.java:718)
at java.lang.Thread.run(Thread.java:745)
However, if I look at the classpath reported by Flink:
$ grep org.apache.hadoop.fs.s3a.S3AFileSystem /usr/hdp/2.6.2.0-205/hadoop/.//*
I can see the hadoop-aws.jar is on the classpath:
...
Binary file /usr/hdp/2.6.2.0-205/hadoop/.//hadoop-aws-2.7.3.2.6.2.0-205.jar matches
Binary file /usr/hdp/2.6.2.0-205/hadoop/.//hadoop-aws.jar matches
...
Investigating further, I can see that the class exists in the jar:
jar -tf /usr/hdp/2.6.2.0-205/hadoop/hadoop-aws.jar | grep org.apache.hadoop.fs.s3a.S3AFileSystem
returns
org/apache/hadoop/fs/s3a/S3AFileSystem$1.class
org/apache/hadoop/fs/s3a/S3AFileSystem$2.class
org/apache/hadoop/fs/s3a/S3AFileSystem$3.class
org/apache/hadoop/fs/s3a/S3AFileSystem$WriteOperationHelper.class
org/apache/hadoop/fs/s3a/S3AFileSystem$4.class
org/apache/hadoop/fs/s3a/S3AFileSystem.class
If I submit an example app:
./flink-1.4.0/bin/flink run \
./flink-1.4.0/examples/batch/WordCount.jar \
--input "s3://${S3_BUCKET}/LICENSE-2.0.txt" \
--output "s3://${S3_BUCKET}/license-word-count.txt"
This runs without a problem.
My jar file doesn't contain any hadoop classes:
$ jar -tf /home/clsadmin/messagehub-to-s3-1.0-SNAPSHOT.jar | grep hadoop
<<not found>>
Does anyone have any idea why the Flink example runs fine, but Flink has a problem loading S3AFileSystem with my code?
Update:
I fixed the problem by:
rm -f flink-1.4.0/lib/flink-shaded-hadoop2-uber-1.4.0.jar
Is this a safe solution?
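As a sanity check after deleting the shaded jar (a sketch, assuming the flink-1.4.0 layout and HDP paths shown above), you can confirm what Flink and the cluster each provide now:
# The shaded hadoop2-uber jar should no longer be listed here:
ls flink-1.4.0/lib/
# while hadoop-aws is still supplied by the cluster's own Hadoop install:
ls /usr/hdp/2.6.2.0-205/hadoop/ | grep hadoop-aws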
I wanted to learn more about Fabric8; however, I cannot build even a very simple project. I am running it locally on a Minikube cluster.
The setup is:
Mac OS Sierra
Minikube v0.18.0
Fabric8 v0.4.122
So I have a simple Spring Boot application in the local Gogs repository. The builds are failing with this message:
/usr/bin/git checkout -f d8af29f8af7a498331a244d245fb321003ef110d
/usr/bin/git rev-list d8af29f8af7a498331a244d245fb321003ef110d # timeout=10
[Pipeline] End of Pipeline
io.fabric8.kubernetes.client.KubernetesClientException: An error has occurred.
at io.fabric8.kubernetes.client.KubernetesClientException.launderThrowable(KubernetesClientException.java:57)
at io.fabric8.kubernetes.client.utils.HttpClientUtils.createHttpClient(HttpClientUtils.java:153)
[...]
Caused by: javax.net.ssl.SSLHandshakeException: sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
at sun.security.ssl.Alerts.getSSLException(Alerts.java:192)
at sun.security.ssl.SSLSocketImpl.fatal(SSLSocketImpl.java:1949)
at sun.security.ssl.Handshaker.fatalSE(Handshaker.java:302)
So I took the ca.crt from Minikube (~/minikube/ca.crt) and added it (base64-encoded) to the jenkins-git-ssh secret which gets mounted in the Jenkins pod in /var/run/secrets/kubernetes.io/serviceaccount. The next build ended with this error:
/usr/bin/git checkout -f d8af29f8af7a498331a244d245fb321003ef110d
/usr/bin/git rev-list d8af29f8af7a498331a244d245fb321003ef110d # timeout=10
[Pipeline] End of Pipeline
io.fabric8.kubernetes.client.KubernetesClientException: Failure executing: GET at: https://kubernetes.default/. Message: Unauthorized.
The same happens when I use apiserver.crt from Minikube.
When using ca.pem instead I get:
Caused by: java.security.cert.CertificateException: Unable to initialize, java.io.IOException: extra data given to DerValue constructor
at sun.security.x509.X509CertImpl.<init>(X509CertImpl.java:198)
at sun.security.provider.X509Factory.engineGenerateCertificate(X509Factory.java:102)
I can access the Kubernetes API from the Jenkins pod only when adding both apiserver.crt and apiserver.key to the secret. Executing
curl -k --cert apiserver.crt --key apiserver.key https://kubernetes.default/.
is successful then - but the Jenkins build is still failing.
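For comparison, the usual in-cluster path a client takes (not a fix, just a sketch using the token and CA that Kubernetes mounts at the service-account path mentioned above) would be:
# Authenticate with the pod's service-account token instead of a client certificate.
SA=/var/run/secrets/kubernetes.io/serviceaccount
curl --cacert $SA/ca.crt \
     -H "Authorization: Bearer $(cat $SA/token)" \
     https://kubernetes.default/api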
So I'm a bit lost here. Does anybody have an idea how to continue?
Thanks and regards,
Daniel
We have a fix, but it is not released yet. Details can be found at https://github.com/fabric8io/fabric8/issues/6829#issuecomment-301467664, which also describes a workaround.
TL;DR: you can edit the jenkins service account and remove the following lines before restarting the Jenkins master pod:
-secrets:
-- name: "jenkins-git-ssh"
-- name: "jenkins-master-ssh"
-- name: "jenkins-release-gpg"
Hope that helps.
I tried to follow the instructions in the README.md file on GitHub to build Leon for Mac OS X (Yosemite).
It worked well, except that when I run the basic test I get a problem with the scalaz3 library not being found:
$ ./leon ./testcases/verification/sas2011-testcases/RedBlackTree.scala
java.lang.UnsatisfiedLinkError: no scalaz3 in java.library.path
at java.lang.ClassLoader.loadLibrary(ClassLoader.java:1865)
at java.lang.Runtime.loadLibrary0(Runtime.java:870)
at java.lang.System.loadLibrary(System.java:1122)
at z3.Z3Wrapper.loadFromJar(Z3Wrapper.java:97)
at z3.Z3Wrapper.<clinit>(Z3Wrapper.java:47)
at z3.scala.Z3Config.<init>(Z3Config.scala:6)
at leon.solvers.z3.FairZ3Solver.<init>(FairZ3Solver.scala:50)
at leon.solvers.SolverFactory$$anonfun$leon$solvers$SolverFactory$$getSolver$1$1$$anon$1.<init>(SolverFactory.scala:50)
at leon.solvers.SolverFactory$$anonfun$leon$solvers$SolverFactory$$getSolver$1$1.apply(SolverFactory.scala:50)
at leon.solvers.SolverFactory$$anonfun$leon$solvers$SolverFactory$$getSolver$1$1.apply(SolverFactory.scala:50)
at leon.solvers.SolverFactory$$anon$12.getNewSolver(SolverFactory.scala:18)
at leon.verification.AnalysisPhase$.checkVC(AnalysisPhase.scala:129)
at leon.verification.AnalysisPhase$$anonfun$10.apply(AnalysisPhase.scala:111)
at leon.verification.AnalysisPhase$$anonfun$10.apply(AnalysisPhase.scala:110)
at scala.collection.TraversableLike$WithFilter$$anonfun$map$2.apply(TraversableLike.scala:728)
at scala.collection.immutable.List.foreach(List.scala:381)
at scala.collection.TraversableLike$WithFilter.map(TraversableLike.scala:727)
at leon.verification.AnalysisPhase$.checkVCs(AnalysisPhase.scala:110)
at leon.verification.AnalysisPhase$.run(AnalysisPhase.scala:45)
at leon.verification.AnalysisPhase$.run(AnalysisPhase.scala:15)
at leon.Pipeline$$anon$1.run(Pipeline.scala:12)
at leon.Pipeline$$anon$1.run(Pipeline.scala:12)
at leon.Main$.execute(Main.scala:236)
at leon.Main$.main(Main.scala:220)
at leon.Main.main(Main.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at scala.reflect.internal.util.ScalaClassLoader$$anonfun$run$1.apply(ScalaClassLoader.scala:70)
at scala.reflect.internal.util.ScalaClassLoader$class.asContext(ScalaClassLoader.scala:31)
at scala.reflect.internal.util.ScalaClassLoader$URLClassLoader.asContext(ScalaClassLoader.scala:101)
at scala.reflect.internal.util.ScalaClassLoader$class.run(ScalaClassLoader.scala:70)
at scala.reflect.internal.util.ScalaClassLoader$URLClassLoader.run(ScalaClassLoader.scala:101)
at scala.tools.nsc.CommonRunner$class.run(ObjectRunner.scala:22)
at scala.tools.nsc.ObjectRunner$.run(ObjectRunner.scala:39)
at scala.tools.nsc.CommonRunner$class.runAndCatch(ObjectRunner.scala:29)
at scala.tools.nsc.ObjectRunner$.runAndCatch(ObjectRunner.scala:39)
at scala.tools.nsc.MainGenericRunner.runTarget$1(MainGenericRunner.scala:65)
at scala.tools.nsc.MainGenericRunner.run$1(MainGenericRunner.scala:87)
at scala.tools.nsc.MainGenericRunner.process(MainGenericRunner.scala:98)
at scala.tools.nsc.MainGenericRunner$.main(MainGenericRunner.scala:103)
at scala.tools.nsc.MainGenericRunner.main(MainGenericRunner.scala)
I tried to build the ScalaZ3 package from EPFL, which requires building Microsoft's Z3 (from GitHub). Building Z3 itself works fine, but building ScalaZ3 fails with a missing "gomp" library:
[error] ld: library not found for -lgomp
[error] clang: error: linker command failed with exit code 1 (use -v to see invocation)
[info] Bundling files:
[info] - /Users/rouquett/git.leon/ScalaZ3/lib-bin/libscalaz3.dylib -> lib-bin/libscalaz3.dylib
[info] - /Users/rouquett/git.leon/ScalaZ3/z3/4.3-osx-64b/lib/libz3.dylib -> lib-bin/libz3.dylib
[info] - /Users/rouquett/git.leon/ScalaZ3/z3/4.3-osx-64b/lib/python2.7 -> lib-bin/python2.7
[info] Packaging /Users/rouquett/git.leon/ScalaZ3/target/scala-2.10/scalaz3_2.10-2.1.jar ...
[info] Done packaging.
I found that there is a Clang OMP (clang-omp) formula for Mac OS X on Homebrew (http://brewformulas.org). However, using it may require tweaking some build scripts to point to brew's installation of clang-omp.
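A minimal sketch of that route (assuming Homebrew and the clang-omp formula mentioned above; whether ScalaZ3's build picks it up without script changes is exactly the open question here):
# Install the OpenMP-capable clang referred to above (or, alternatively, a brew-installed gcc,
# which ships libgomp - the library the linker is complaining about):
brew install clang-omp
# The ScalaZ3 build scripts would then need to be pointed at this compiler
# instead of the system clang; that is the tweaking mentioned above.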
Has anyone experienced similar problems or solved them?
Nicolas.
These are the steps I followed to get the latest version of Leon running on OSX:
git clone git@github.com:epfl-lara/leon.git
cd leon
git remote add osx git@github.com:mantognini/leon.git
git fetch osx
git checkout osx
git rebase origin/master # adds precompiled OSX binaries
sbt clean compile script
Make sure to link the leon binary into a directory on your $PATH; for example, after the last step run ln -sv $(pwd)/leon /usr/local/bin/leon.
To update the binary to the latest version of Leon, run
git fetch origin
git rebase origin/master
sbt clean compile script
This assumes you are still on the osx branch.
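After that, the test case from the beginning of the question should run from the repository root via the symlinked binary:
# Same invocation as before, now resolved through /usr/local/bin/leon.
leon ./testcases/verification/sas2011-testcases/RedBlackTree.scala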
I am using Eclipse for Android development. I developed a project and added the Google Play Services and appcompat-v7 dependencies to the build path. When I run it as a normal Android project it executes perfectly. I then added a Maven pom.xml to generate the APK file, but when I run mvn install it gives this error:
[ERROR] Error when generating sources.
org.apache.maven.plugin.MojoExecutionException:
at com.jayway.maven.plugins.android.phase01generatesources.GenerateSourcesMojo.generateR(GenerateSourcesMojo.java:608)
at com.jayway.maven.plugins.android.phase01generatesources.GenerateSourcesMojo.execute(GenerateSourcesMojo.java:229)
at org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo(DefaultBuildPluginManager.java:106)
at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:208)
at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:153)
at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:145)
at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:84)
at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:59)
at org.apache.maven.lifecycle.internal.LifecycleStarter.singleThreadedBuild(LifecycleStarter.java:183)
at org.apache.maven.lifecycle.internal.LifecycleStarter.execute(LifecycleStarter.java:161)
at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:317)
at org.apache.maven.DefaultMaven.execute(DefaultMaven.java:152)
at org.apache.maven.cli.MavenCli.execute(MavenCli.java:555)
at org.apache.maven.cli.MavenCli.doMain(MavenCli.java:214)
at org.apache.maven.cli.MavenCli.main(MavenCli.java:158)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.codehaus.plexus.classworlds.launcher.Launcher.launchEnhanced(Launcher.java:289)
at org.codehaus.plexus.classworlds.launcher.Launcher.launch(Launcher.java:229)
at org.codehaus.plexus.classworlds.launcher.Launcher.mainWithExitCode(Launcher.java:415)
at org.codehaus.plexus.classworlds.launcher.Launcher.main(Launcher.java:356)
at org.codehaus.classworlds.Launcher.main(Launcher.java:46)
Caused by: com.jayway.maven.plugins.android.ExecutionException: ANDROID-040-001: Could not execute: Command = cmd.exe /X /C "D:\NagarjunaWork\android\adt-bundle-windows-x86_64-20140321\sdk\build-tools\android-4.4.2\aapt.exe package -f --no-crunch -I D:\NagarjunaWork\android\adt-bundle-windows-x86_64-20140321\sdk\platforms\android-19\android.jar -M D:\5-8-2014\QuickRideApp\QuickRide\AndroidManifest.xml -S D:\5-8-2014\QuickRideApp\QuickRide\res -A D:\5-8-2014\QuickRideApp\QuickRide\target\generated-sources\combined-assets -m -J D:\5-8-2014\QuickRideApp\QuickRide\target\generated-sources\r --output-text-symbols D:\5-8-2014\QuickRideApp\QuickRide\target --auto-add-overlay", Result = -1073741819
at com.jayway.maven.plugins.android.CommandExecutor$Factory$DefaultCommandExecutor.executeCommand(CommandExecutor.java:252)
at com.jayway.maven.plugins.android.phase01generatesources.GenerateSourcesMojo.generateR(GenerateSourcesMojo.java:604)
... 23 more
This can be caused by problems in your XML files, such as referring to a resource ID that no longer exists or never existed. See "MojoExecutionException: Maven with Android and aapt.exe has stopped working" for possible solutions.
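If it helps narrow things down, this is just the aapt.exe invocation copied out of the log above and re-wrapped with cmd's ^ line continuation; running it by hand from a Windows command prompt may print the specific resource or XML error, or at least reproduce the crash outside Maven:
D:\NagarjunaWork\android\adt-bundle-windows-x86_64-20140321\sdk\build-tools\android-4.4.2\aapt.exe package -f --no-crunch ^
  -I D:\NagarjunaWork\android\adt-bundle-windows-x86_64-20140321\sdk\platforms\android-19\android.jar ^
  -M D:\5-8-2014\QuickRideApp\QuickRide\AndroidManifest.xml ^
  -S D:\5-8-2014\QuickRideApp\QuickRide\res ^
  -A D:\5-8-2014\QuickRideApp\QuickRide\target\generated-sources\combined-assets ^
  -m -J D:\5-8-2014\QuickRideApp\QuickRide\target\generated-sources\r ^
  --output-text-symbols D:\5-8-2014\QuickRideApp\QuickRide\target --auto-add-overlay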