Updating my question: My test scenario is to get the file sizes of objects in a particular S3 bucket. For this I have installed the robotframework-aws library, but I am not sure which keyword to use to get the file sizes. Here is the code I have written so far:
Run AWS CLI Command to capture file size of S3 MRL Hub Source
    Create Session With Keys    region: us-east-1    access_key: xxxx    secret_key: xxxx    aws_session_token: str
    Read File From S3    bucket_name: com-abc-def-ghi    key: name/of/the/file/i/am/looking/for.parquet
With this code I am getting the following error:
InvalidRegionError: Provided region_name 'region: us-east-1' doesn't match a supported format.
You can use the Run keyword, which is part of the OperatingSystem library and is already included by default when you install Robot Framework.
With it you can make Robot Framework run any command-line command you wish. For example:
*** Settings ***
Library    OperatingSystem

*** Test Cases ***
Test run command
    ${output}=    Run    aws --version
    Log    ${output}
The problem comes when you want to use the interactive capabilities of aws configure. You can't expect the Robot Framework test case to ask for your input, so you need to provide all aws configure options beforehand. This means you need to prepare a profile file for your test case; then you can concatenate more commands, like:
*** Test Cases ***
Test run command
    ${output}=    Run    aws configure --profile <profilename> && set https_proxy http://webproxy.xyz.com:8080
    Log    ${output}
Or, better, use a profile directly with s3, for example: aws s3 ls --profile <profilename>.
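If you want to build that profile without any interactive prompt at all, one option is the aws configure set subcommand (the profile name and key values here are placeholders):

# Write each setting into the named profile non-interactively
aws configure set aws_access_key_id AKIAXXXXXXXXXXXXXXXX --profile testprofile
aws configure set aws_secret_access_key xxxxxxxxxxxxxxxxxxxx --profile testprofile
aws configure set region us-east-1 --profile testprofile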
Bear in mind that the best way to do this is to use some kind of external library, like AWSLibrary, or to create your own custom library using the boto3 Python library.
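And since the original question was about file sizes: a single CLI call, which you could run through the Run keyword above, returns an object's size in bytes. The bucket and key below are taken from the question; the profile name is a placeholder:

# head-object fetches the object's metadata without downloading it;
# ContentLength is the size in bytes
aws s3api head-object \
    --bucket com-abc-def-ghi \
    --key name/of/the/file/i/am/looking/for.parquet \
    --profile <profilename> \
    --query ContentLength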
I am trying to create a deployment pipeline with GitLab CI for a React project. The build is working fine, and I use artifacts to store the dist folder from my yarn build command. This is working fine as well.
The issue is regarding my deployment with command: aws s3 sync dist/'bucket-name'.
Expected: "Done in x seconds"
Actual:
error Command failed with exit code 2.
info Visit https://yarnpkg.com/en/docs/cli/run for documentation about this command.
Running after_script 00:01
Uploading artifacts for failed job 00:01
ERROR: Job failed: exit code 1
The files seem to have been uploaded correctly to the S3 bucket, however I do not know why I get an error on the deployment job.
When I run the aws s3 sync dist/'bucket-name' command locally, everything works correctly.
Check out AWS CLI Return Codes
2 -- The meaning of this return code depends on the command being run.
The primary meaning is that the command entered on the command line failed to be parsed. Parsing failures can be caused by, but are not limited to, missing any required subcommands or arguments or using any unknown commands or arguments. Note that this return code meaning is applicable to all CLI commands.
The other meaning is only applicable to s3 commands. It can mean at least one or more files marked for transfer were skipped during the transfer process. However, all other files marked for transfer were successfully transferred. Files that are skipped during the transfer process include: files that do not exist, files that are character special devices, block special device, FIFO's, or sockets, and files that the user cannot read from.
The second paragraph might explain what's happening.
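If you want the CI job to tolerate skipped files but still fail on real errors, one rough sketch (the bucket name is a placeholder, not your actual command) is to branch on the exit code:

# Capture the exit code of the sync without aborting the script
aws s3 sync dist/ s3://bucket-name || rc=$?
if [ "${rc:-0}" -eq 2 ]; then
    # At least one file was skipped, but everything else transferred
    echo "warning: some files were skipped during sync" >&2
elif [ "${rc:-0}" -ne 0 ]; then
    # Any other non-zero code is a genuine failure
    exit "${rc}"
fi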
There is no yarn build command. See https://classic.yarnpkg.com/en/docs/cli/run
As Anton mentioned, the second paragraph of his answer described the problem. The solution was removing special characters from a couple of SVGs. I suspect uploading the dist folder as a zipped artifact might have changed some of the file names, which confused S3. Removing ® and + from the filenames resolved the issue.
I have an ECS task that runs some test cases. I have it running in Fargate. Yay!
Now I want to download the test results file(s) from the container. I have the task and container IDs handy. I can find the exit code with
aws ecs describe-tasks --cluster Fargate --tasks <my-task-id>
How do I download the log and/or files produced?
It looks like, as of right now, the only way to get test results off of my server is to send the results to S3 before the container shuts down.
From this thread, there's no way to mount a volume / EFS onto a Fargate container.
Here's my bash script for running my tests (in build.sh) and then uploading the results to S3:
#!/bin/bash
echo Running tests...
pushd ~circleci/project/
export AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY
export AWS_SECRET_ACCESS_KEY=$AWS_SECRET_KEY
commandToRun="~/project/.circleci/build_scripts/build.sh"
# Run the command, teeing its output to the same log file we upload below
eval $commandToRun 2>&1 | tee /tmp/build-$FEATURE.log
# Get the exit code of the command itself, not of tee
exitCode=${PIPESTATUS[0]}
aws s3 cp /tmp/build-$FEATURE.log s3://$CICD_BUCKET/build.log \
    --storage-class REDUCED_REDUNDANCY \
    --region us-east-1
exit ${exitCode}
Of course, you'll have to pass in the AWS_ACCESS_KEY, AWS_SECRET_KEY, and CICD_BUCKET environment variables, as well as the FEATURE variable used in the log file name. The bucket you choose needs to be created in advance, but any directory structure below it does NOT need to be created in advance.
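If you're wondering how those variables reach the container in the first place, one way (task, container, and value names here are hypothetical, not taken from the original setup) is a containerOverrides block when launching the task:

# Pass the required variables at launch time via run-task overrides
# (a Fargate launch also needs --launch-type FARGATE and
# --network-configuration, omitted here for brevity)
aws ecs run-task --cluster Fargate --task-definition my-test-task \
    --overrides '{"containerOverrides":[{"name":"my-container","environment":[
        {"name":"AWS_ACCESS_KEY","value":"..."},
        {"name":"AWS_SECRET_KEY","value":"..."},
        {"name":"CICD_BUCKET","value":"my-cicd-bucket"},
        {"name":"FEATURE","value":"my-feature"}]}]}'
# The bucket itself must exist before the task runs
aws s3 mb s3://my-cicd-bucket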
You probably want to look at using CodeBuild for this use case, which can automatically copy artifacts to S3.
It's actually quite easy to orchestrate the following using a simple bash script and the AWS CLI (a rough sketch follows the list):
1. Idempotently create/update a CodeBuild project (using a simple CloudFormation template you can define in your source repository)
2. Run a CodeBuild job that executes a given revision of your source repository (again using a buildspec.yml specification defined in your source repository)
3. Attach to the CloudWatch Logs log group for your CodeBuild job and stream log output
4. Finally, detect when the job has completed successfully or not, and then download any artifacts locally from S3
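A minimal sketch of that loop using plain CLI calls (the project and bucket names are hypothetical, and log streaming is reduced to a status poll):

# Kick off a build and capture its id
BUILD_ID=$(aws codebuild start-build --project-name my-codebuild-project \
    --query 'build.id' --output text)

# Poll until the build leaves the IN_PROGRESS state
STATUS=IN_PROGRESS
while [ "$STATUS" = "IN_PROGRESS" ]; do
    sleep 10
    STATUS=$(aws codebuild batch-get-builds --ids "$BUILD_ID" \
        --query 'builds[0].buildStatus' --output text)
done
echo "Build finished with status: $STATUS"

# Download whatever artifacts the buildspec published to S3
aws s3 cp s3://my-artifact-bucket/artifacts/ ./artifacts --recursive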
I use this approach to run builds in CodeBuild, with Bamboo as the overarching continuous delivery system.
I am trying to get the bq CLI to work with multiple service accounts for different projects without having to re-authenticate using gcloud auth login or bq init.
An example of what I want to do, and am able to do using gsutil:
I have used gsutil with a .boto configuration file containing:
[Credentials]
gs_service_key_file = /path/to/key_file.json
[Boto]
https_validate_certificates = True
[GSUtil]
content_language = en
default_api_version = 2
default_project_id = my-project-id
[OAuth2]
on a GCE instance to run an arbitrary gsutil command as a service account. The service account does not need to be unique or globally defined on the GCE instance: as long as a service account is set up in my-project-id and a private key has been created, the private key file referenced in the .boto config will take care of authentication. For example, if I run
export BOTO_CONFIG=/path/to/my/.boto_project_1
gsutil -m cp gs://mybucket/myobject .
I can copy from any project for which I have a service account set up and whose private key file is referenced in .boto_project_1. In this way, I can run a similar gsutil command for project_2 just by referencing the .boto_project_2 config file. No manual authentication needed.
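Switching projects is then just a matter of pointing BOTO_CONFIG at the other file (bucket and object names are placeholders):

# Same pattern for the second project
export BOTO_CONFIG=/path/to/my/.boto_project_2
gsutil -m cp gs://anotherbucket/anotherobject .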
The case with bq CLI
In the case of the BigQuery command-line tool, I want to reference a config file or pass a config option like a key file to run a bq load command, i.e., upload the same .csv file that is in GCS for various projects. I want to automate this without having to run bq init each time.
I have read here that you can configure a .bigqueryrc file and pass in your credential and key files as options; however, that answer is from 2012, references outdated bq credential files, and throws errors due to the openssl and pyopenssl installs it mentions.
My question
Provide two example bq load commands, with any necessary options or .bigqueryrc files, to correctly load a .csv file from GCS into BigQuery for two distinct projects without needing to run bq init or authenticate manually between the two commands. Assume the .csv file is already correctly in each project's GCS bucket.
Simply use gcloud auth activate-service-account and use the global --project flag.
https://cloud.google.com/sdk/gcloud/reference/auth/activate-service-account
https://cloud.google.com/sdk/gcloud/reference/
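A sketch of what that looks like for two projects (all project, dataset, bucket, and key-file names below are placeholders; the schema can also be given inline instead of as a file):

# Project 1: activate its service account, then load without bq init
gcloud auth activate-service-account --key-file=/path/to/project1-key.json
bq --project_id=project-1 load mydataset.mytable gs://project1-bucket/data.csv ./schema.json

# Project 2: activate the other service account and repeat
gcloud auth activate-service-account --key-file=/path/to/project2-key.json
bq --project_id=project-2 load mydataset.mytable gs://project2-bucket/data.csv ./schema.json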
Missing environment variable: S3_ACCESS_KEY_ID
is the error I am getting even after assigning it. I used the aws configure command, in which I inserted the environment variables, but when listing I get this error. What should I do?
Command line:
$ export S3_ACCESS_KEY_ID=************
$ s3 list
Missing environment variable: S3_SECRET_ACCESS_KEY
The immediate problem is that the environment variable is wrong.
You set:
export AWS_ACCESS_KEY_ID=
but it is looking for S3_ACCESS_KEY_ID:
$ s3 list
Missing environment variable: S3_SECRET_ACCESS_KEY
What is possibly more interesting, however, is that you did use aws configure in the first place, although this is not shown in recent edits, only in the images in the original post. We would expect aws configure to set up the environment correctly. And we would also expect the variables to be named AWS_*, not S3_*. So why is s3 list looking for S3_*?
I can't find any reference to s3 list. Are you sure this is the correct command? Do you actually want to use something like aws s3 ls?
If you are new to AWS, read the AWS CLI getting started documentation.
The recommended way to use the AWS CLI is to run aws configure to set up your credentials and environment. If you insist on setting the environment variables manually, you need to export all three (the keys shown are examples from the AWS CLI documentation):
$ export AWS_ACCESS_KEY_ID=AKIAIOSFODNN7EXAMPLE
$ export AWS_SECRET_ACCESS_KEY=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
$ export AWS_DEFAULT_REGION=us-west-2
I'm trying to publish an .apk to my Application Center through the console. I've followed this note, but it doesn't work in my environment:
https://mobilefirstplatform.ibmcloud.com/tutorials/en/foundation/7.1/moving-production/distributing-mobile-applications-with-application-center/#cmdLineTools
If I type:
./acdeploytool.sh /home/miguel/Downloads/HelloWorldMyHelloAndroid.apk
I get this error message:
FWLAC0803E: Unable to connect:
Connection refused
Perhaps the server or context is wrongly specified.
File:/home/myUser/Downloads/HelloWorldMyHelloAndroid.apk
And if I try another way using this java command:
java com.ibm.appcenter.Upload -f http://localhost:9080 -c applicationcenter -u demo -p demo /home/myUser/Downloads/HelloWorldMyHelloAndroid.apk
I get this one:
Error: Could not find or load main class com.ibm.appcenter.Upload
I don't get any errors when I do this 'publish' operation directly in Application Center or through MobileFirst Studio.
Miguel, whether you use the script or the Java command, you need to specify the arguments to use. Please try the following:
./acdeploytool.sh -s http://localhost:9080 -c applicationcenter -u demo -p demo /home/miguel/Downloads/HelloWorldMyHelloAndroid.apk
I tried a similar command in my environment and was able to successfully deploy the apk to Application Center. If the command still does not work, make sure that the host/port that you are using are correct, and that the username and password are valid.
For the Java command that you executed, I see a few problems. First, the -cp argument needs to be specified in order to add the applicationcenterdeploytool.jar and json4j.jar files to the classpath. Next, the command shows "-f", but it should be "-s" to specify the server. Lastly, the path specified for the .apk differs from the one in your first command: myUser vs. miguel, so make sure the correct path is used. If there are any further questions, let me know. Thanks.
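Putting that together, the corrected Java invocation would look something like this (the jar paths are assumed; point -cp at wherever applicationcenterdeploytool.jar and json4j.jar actually live):

# Add both jars to the classpath with -cp and use -s for the server
java -cp applicationcenterdeploytool.jar:json4j.jar com.ibm.appcenter.Upload \
    -s http://localhost:9080 -c applicationcenter -u demo -p demo \
    /home/miguel/Downloads/HelloWorldMyHelloAndroid.apk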