Sentry doesn't upload dSYM files to the S3 filestore backend and gives me the exception: NoSuchKey: An error occurred (NoSuchKey) - amazon-s3

I am running on-premise Sentry on OpenShift.
I want to use an S3 bucket as the filestore for uploaded dSYM files.
While trying to upload dSYM files with sentry-cli using the command below, I get an error:
sentry-cli upload-dif -t dsym --project service-level-reporting --log-level debug
sentry-worker log:
[ERROR] celery.worker.job: Task sentry.tasks.assemble.assemble_dif[01205ec8-fb54-4cc0-ae48-ce75bb96f880] raised unexpected: NoSuchKey(u'An error occurred (NoSuchKey) when calling the GetObject operation: Unknown',) (data={u'hostname': 'celery#sentry-worker-42-mw42p', u'name': 'sentry.tasks.assemble.assemble_dif', u'args': '[]', u'internal': False, u'kwargs': "{'chunks': ['7f91f5edfe5ce6650448c3edf6cdea6bed5a3699'], 'checksum': '7f91f5edfe5ce6650448c3edf6cdea6bed5a3699', 'project_id': 7L, 'name': 'libswiftos.dylib'}", u'id': '01205ec8-fb54-4cc0-ae48-ce75bb96f880'})
I have verified from my pods that the target S3 bucket is accessible. Could someone please help me resolve this issue?

It seems you haven't set your AWS keys for sentry-cli, or you are calling S3 with the wrong keys. If you haven't configured sentry-cli with AWS keys yet, this may help: https://github.com/getsentry/sentry-docs/pull/956/files?short_path=b1d11e7#diff-b1d11e7d8a13bff13c9b3012f58e0b71
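If you want to rule out the credentials and bucket themselves, here is a minimal sketch of the kind of round-trip check you can run with the same AWS keys Sentry is pointed at (AWS SDK for Java; the bucket and key names are placeholders):

import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.AmazonS3Exception;

public class FilestoreBucketCheck {
    public static void main(String[] args) {
        // Credentials and region come from the default provider chain
        // (environment variables / instance profile), i.e. the same keys
        // you configure for the filestore.
        AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();
        String bucket = "my-sentry-filestore"; // placeholder bucket name
        String key = "filestore-write-check";  // placeholder test key
        try {
            s3.putObject(bucket, key, "ok");
            System.out.println("read back: " + s3.getObjectAsString(bucket, key));
        } catch (AmazonS3Exception e) {
            // NoSuchBucket, AccessDenied and NoSuchKey all surface here
            System.err.println(e.getErrorCode() + ": " + e.getErrorMessage());
        }
    }
}

If this round-trip succeeds with the same keys and bucket the worker uses, the NoSuchKey is more likely about which bucket or prefix Sentry reads chunks from than about the credentials themselves.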

Related

Reading files from S3 to kafka topic

I have a situation where all the event data is stored in an S3 bucket, and I need to move it from S3 into a Kafka topic on EC2. I am using the CamelAWSS3Connector, but the connector is not working.
This is the error I am facing:
[2023-01-06 10:11:21,048] ERROR Failed to create job for config/s3_connect.properties (org.apache.kafka.connect.cli.ConnectStandalone:107)
[2023-01-06 10:11:21,053] ERROR Stopping after connector error (org.apache.kafka.connect.cli.ConnectStandalone:117)
java.util.concurrent.ExecutionException: java.lang.NoClassDefFoundError: org/jctools/queues/MessagePassingQueue$Supplier
at org.apache.kafka.connect.util.ConvertingFutureCallback.result(ConvertingFutureCallback.java:115)
at org.apache.kafka.connect.util.ConvertingFutureCallback.get(ConvertingFutureCallback.java:99)
at org.apache.kafka.connect.cli.ConnectStandalone.main(ConnectStandalone.java:114)
Caused by: java.lang.NoClassDefFoundError: org/jctools/queues/MessagePassingQueue$Supplier
I was expecting the connector to push messages from S3 to the Kafka topic.
This is my properties file:
name=CamelAwss3SourceConnector
connector.class=org.apache.camel.kafkaconnector.aws2s3.CamelAws2s3SourceConnector
key.converter=org.apache.kafka.connect.storage.StringConverter
value.converter=org.apache.kafka.connect.converters.ByteArrayConverter
camel.source.maxPollDuration=10000
topics=mytopic
camel.component.aws-s3.access-key=XXXXXXXX
camel.component.aws-s3.region=ap-south-1
camel.source.path.bucketNameOrArn=poc-s3-kafkatopic
camel.source.endpoint.autocloseBody=true
camel.source.endpoint.deleteAfterRead=true
After using the export command to add the jar location before starting the connector, this is the error:
[2023-01-11 06:43:05,528] ERROR Stopping after connector error (org.apache.kafka.connect.cli.ConnectStandalone:117) java.util.concurrent.ExecutionException: java.lang.NoClassDefFoundError: org/apache/camel/kafkaconnector/CamelSourceConnectorConfig
at org.apache.kafka.connect.util.ConvertingFutureCallback.result(ConvertingFutureCallback.java:115)
at org.apache.kafka.connect.util.ConvertingFutureCallback.get(ConvertingFutureCallback.java:99)
at org.apache.kafka.connect.cli.ConnectStandalone.main(ConnectStandalone.java:114) Caused by: java.lang.NoClassDefFoundError: org/apache/camel/kafkaconnector/CamelSourceConnectorConfig
Make sure you have added plugin.path=/path/to/extracted-camel-connector to the connect-standalone.properties file.
If that doesn't work, you'll need to export the CLASSPATH environment variable so that it includes the jar files in that path.
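To verify that the jars are actually visible before starting Connect, a quick throwaway check like this can help (the two class names come straight from the stack traces above):

// Run with the same CLASSPATH you export before starting connect-standalone,
// e.g. java -cp "$CLASSPATH:." ConnectorClasspathCheck
public class ConnectorClasspathCheck {
    public static void main(String[] args) {
        String[] required = {
            "org.jctools.queues.MessagePassingQueue",
            "org.apache.camel.kafkaconnector.CamelSourceConnectorConfig"
        };
        for (String name : required) {
            try {
                Class.forName(name);
                System.out.println("FOUND   " + name);
            } catch (ClassNotFoundException e) {
                System.out.println("MISSING " + name);
            }
        }
    }
}

If either class prints MISSING, the connector archive (or its jctools dependency) is not on the path that Connect is being started with.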

<Seahorse::Client::NetworkingError: execution expired> AWS CodeDeploy

I am facing an issue in the Deploy stage (CodeDeploy) of my AWS CodePipeline.
I have done all the configuration: the agent is running and I have assigned the IAM role to the instance, but I still get an error while deploying.
Here is the error:
2022-08-29 19:52:01 ERROR [codedeploy-agent(2529)]: InstanceAgent::Plugins::CodeDeployPlugin::CommandPoller: Network error: #<Seahorse::Client::NetworkingError: execution expired>
2022-08-29 19:52:29 INFO [codedeploy-agent(2529)]: Version file found in /opt/codedeploy-agent/.version with agent version OFFICIAL_1.3.2-1902_rpm.
2022-08-29 19:53:31 INFO [codedeploy-agent(2529)]: [Aws::CodeDeployCommand::Client 0 62.10601 3 retries] poll_host_command(host_identifier:"arn:aws:ec2:ap-south-1:068066723617:instance/i-05db00vhma7aa5a2") Seahorse::Client::NetworkingError execution expired
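To narrow this down, this is the kind of minimal connectivity probe I can run on the instance; it assumes the timeout happens while the agent polls the regional codedeploy-commands endpoint (which is what the poll_host_command line suggests), and the hostname below is an assumption on my part:

import java.net.HttpURLConnection;
import java.net.URL;

public class CodeDeployEndpointProbe {
    public static void main(String[] args) throws Exception {
        // Assumed regional endpoint polled by the agent; region taken from the instance ARN above
        URL url = new URL("https://codedeploy-commands.ap-south-1.amazonaws.com/");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setConnectTimeout(5000); // fail fast instead of hanging until "execution expired"
        conn.setReadTimeout(5000);
        conn.setRequestMethod("GET");
        // Any HTTP status (even 4xx) proves outbound reachability; a timeout here
        // points at security groups, NACLs, route tables or a proxy in between.
        System.out.println("HTTP " + conn.getResponseCode());
    }
}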

Internal server error when trying to build and deploy to AEM

My local AEM instance suddenly stopped working when I switched branches in git and installed the package with
mvn clean install -PautoInstallPackage.
The build failed with [ERROR] Request to http://localhost:4502/crx/packmgr/service.jsp failed, response=Internal Server Error
My error log shows the following errors:
25.10.2018 11:52:33.607 *ERROR* [127.0.0.1 [1540504353564] POST /crx/packmgr/service.jsp HTTP/1.1] org.apache.jackrabbit.oak.spi.security.authentication.external.impl.ExternalLoginModule No IDP found with name 0654f74c177ec80b60f7922a9a6195cf. Will not be used for login.
25.10.2018 11:52:33.607 *ERROR* [127.0.0.1 [1540504353564] POST /crx/packmgr/service.jsp HTTP/1.1] org.apache.jackrabbit.oak.spi.security.authentication.external.impl.ExternalLoginModule No IDP found with name a9dea3b044e912071cbffd4839016d2e. Will not be used for login.
25.10.2018 12:00:30.005 *INFO* [sling-default-2-Registered Service.1079] com.adobe.granite.taskmanagement.impl.jcr.TaskArchiveService archiving tasks at: 'Thu Oct 25 12:00:30 HST 2018'
25.10.2018 12:00:58.610 *ERROR* [127.0.0.1 [1540504858546] POST /crx/packmgr/service.jsp HTTP/1.1] org.apache.sling.engine.impl.SlingRequestProcessorImpl service: Uncaught SlingException
java.io.IOException: Unable to get component of class 'interface org.apache.sling.rewriter.Generator' with type 'htmlparser'.
I've tried adding <useProxy>false</useProxy> to my parent POM file, as suggested in a similar thread posted here, but that did not work. I've also tried re-cloning the repo and starting over, but since it's a server-side error, that did nothing either.
Additional Info:
Running on Windows 10
AEM 6.4
Any assistance will be greatly appreciated.
Thanks!
Thanks for your help. The issue happened because I switched branches and installed and deployed the package over the existing package in AEM. By removing the quickstart folder and restarting the JAR file, which creates a fresh AEM directory, I was able to install and deploy the correct branch without error.

FATAL [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Error starting MRAppMaster java.lang.NoSuchMethodError:

I am trying to run a MapReduce job on an EMR cluster. The Hadoop version on EMR is 2.7.3.
The code reads HFiles residing in an S3 bucket, but every time I run it, it fails with the error below.
2018-02-22 20:02:11,641 FATAL [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Error starting MRAppMaster
java.lang.NoSuchMethodError: org.apache.hadoop.mapred.TaskLog.createLogSyncer()Ljava/util/concurrent/ScheduledExecutorService;
at org.apache.hadoop.mapreduce.v2.app.MRAppMaster.<init>(MRAppMaster.java:250)
at org.apache.hadoop.mapreduce.v2.app.MRAppMaster.<init>(MRAppMaster.java:233)
at org.apache.hadoop.mapreduce.v2.app.MRAppMaster.main(MRAppMaster.java:1472)
2018-02-22 20:02:12,188 INFO [main] org.apache.hadoop.util.ExitUtil: Exiting with status 1
End of LogType:syslog
The code was originally designed to read files from HDFS, which worked fine on CDH-based clusters where the Hadoop version is 2.6.0. However, there was a requirement to read the HFiles from an S3 bucket on an EMR-based cluster in AWS. I made a few changes in the code so it can read from any file system. Below is the snippet of the change:
...
Path JSONOutputjob2 = new Path( args[1] );
FileSystem.get(JSONOutputjob2.toUri(), conf2).delete(JSONOutputjob2, true);
...
I am passing the path as an argument, and here are the variants of the file path that I have tried:
s3n://emr-ip/path/to/the/file
s3a://emr-ip/path/to/the/file
s3://emr-ip/path/to/the/file
This error is really driving me crazy. I have updated my pom.xml to use the Hadoop version available on the cluster and built the project; the build was successful, but the job still does not work. Any suggestions or help are much appreciated.
Edit:
I have updated my pom to use the AWS (EMR) Hadoop version, i.e. 2.7.3, which did not fix the issue.
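Since a NoSuchMethodError usually means an older Hadoop jar is shadowing the cluster's 2.7.3 classes at runtime, I can check where the class is actually loaded from with a small diagnostic like this (my own helper, not part of Hadoop):

import java.security.CodeSource;

// Run with the same classpath the job uses (e.g. via `hadoop jar`); if the
// location points at a jar bundled with the job instead of the EMR install,
// the packaged Hadoop dependencies are the problem, not the code.
public class WhichTaskLog {
    public static void main(String[] args) throws Exception {
        Class<?> taskLog = Class.forName("org.apache.hadoop.mapred.TaskLog");
        CodeSource src = taskLog.getProtectionDomain().getCodeSource();
        System.out.println(src != null ? src.getLocation() : "bootstrap classpath");
    }
}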

Getting NullPointerException when using a S3 job file with Samza

I'm getting the following exception when passing an S3 file path to yarn.package.path.
Exception in thread "main" java.lang.NullPointerException
at org.apache.hadoop.fs.s3native.NativeS3FileSystem.getFileStatus(NativeS3FileSystem.java:433)
at org.apache.samza.job.yarn.ClientHelper.submitApplication(ClientHelper.scala:111)
at org.apache.samza.job.yarn.YarnJob.submit(YarnJob.scala:54)
at org.apache.samza.job.yarn.YarnJob.submit(YarnJob.scala:47)
at org.apache.samza.job.JobRunner.run(JobRunner.scala:62)
at org.apache.samza.job.JobRunner$.main(JobRunner.scala:37)
I'm able to curl the s3 file from the same box (after exporting the AWS environment variables).
This is how the package path is set in my job properties file:
yarn.package.path=s3n://{ACCESS_KEY}:{SECRET_KEY}#bucketname/path1/path2/tar.gz
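To take Samza out of the picture, I can reproduce the same lookup with a standalone snippet like the one below (the bucket and object names are placeholders, and passing the credentials through the Hadoop s3n configuration keys instead of embedding them in the URL is my own assumption; it relies on the AWS environment variables I already export):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class YarnPackagePathCheck {
    public static void main(String[] args) throws Exception {
        // Placeholder package location, same s3n scheme as yarn.package.path
        Path pkg = new Path("s3n://bucketname/path1/path2/job.tar.gz");

        Configuration conf = new Configuration();
        // Legacy s3n credential keys, read from the exported environment variables
        conf.set("fs.s3n.awsAccessKeyId", System.getenv("AWS_ACCESS_KEY_ID"));
        conf.set("fs.s3n.awsSecretAccessKey", System.getenv("AWS_SECRET_ACCESS_KEY"));

        FileSystem fs = FileSystem.get(pkg.toUri(), conf);
        FileStatus status = fs.getFileStatus(pkg); // the call that throws the NPE inside ClientHelper
        System.out.println(status.getPath() + " -> " + status.getLen() + " bytes");
    }
}

If this standalone lookup succeeds, the problem is more likely how the access key and secret embedded in yarn.package.path are being parsed than S3 access itself.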