How can I automatically upload VOD files that I upload to Ant Media Server to my S3 bucket?

When S3 integration is enabled on Ant Media Server, recorded VODs are uploaded to the S3 bucket, but when we upload a VOD file to the server ourselves it is not transferred to the S3 bucket. Is there any way to do that on the server side?

You can automatically transfer uploaded VoD files to the S3 bucket with the script below by following these steps.
1. Install FFmpeg
apt-get update && apt-get install ffmpeg -y
2. Save the script under /usr/local/antmedia/ and make it executable with chmod. (Don't forget to add your AWS access/secret keys to the script.)
You can download the script from the following link or you can find it at the bottom of the page.
https://github.com/muratugureminoglu/Scripts/blob/master/vod-upload-s3.sh
chmod +x /usr/local/antmedia/vod-upload-s3.sh
3. Modify the red5-web.properties file in your webapps as follows.
vim [AMS-DIR]/webapps/your_application/WEB-INF/red5-web.properties
Add or change the following line.
settings.vodUploadFinishScript=/usr/local/antmedia/vod-upload-s3.sh
4. Restart Ant Media Server service.
systemctl restart antmedia
5. Follow the link below to play the uploaded VoD files.
VOD not playing after s3 recording enabled in Ant Media server
That's it.
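Once everything is in place, you can verify the setup by running the script manually against an existing VoD file. The path below is just a hypothetical example; use a real file under your application's streams directory.
/usr/local/antmedia/vod-upload-s3.sh /usr/local/antmedia/webapps/LiveApp/streams/test.mp4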
Script
#!/bin/bash
#
# Installation Instructions
#
# apt-get update && apt-get install ffmpeg -y
# vim [AMS-DIR]/webapps/applications(LiveApp or etc.)/WEB-INF/red5-web.properties
# settings.vodUploadFinishScript=/Script-DIR/vod-upload-s3.sh
# sudo service antmedia restart
#
# Check if AWS CLI is installed
if ! command -v aws > /dev/null 2>&1; then
    rm -rf aws* > /dev/null 2>&1
    echo "Please wait. AWS CLI is installing..."
    curl "https://d1vvhvl2y92vvt.cloudfront.net/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip" > /dev/null 2>&1
    unzip awscliv2.zip > /dev/null 2>&1
    sudo ./aws/install
    echo "AWS CLI installed."
    rm -rf aws*
fi
# Set to "Y" to delete the local VoD file after a successful upload
DELETE_LOCAL_FILE="Y"
AWS_ACCESS_KEY=""
AWS_SECRET_KEY=""
AWS_REGION=""
AWS_BUCKET_NAME=""
# AWS configuration
aws configure set aws_access_key_id "$AWS_ACCESS_KEY"
aws configure set aws_secret_access_key "$AWS_SECRET_KEY"
aws configure set region "$AWS_REGION"
aws configure set output json

# First argument is the full path of the VoD file
tmpfile=$1

# Remux the MP4 so the moov atom is at the front (faststart) for progressive playback
mv "$tmpfile" "${tmpfile%.*}.mp4_tmp"
ffmpeg -i "${tmpfile%.*}.mp4_tmp" -c copy -map 0 -movflags +faststart "$tmpfile"
rm "${tmpfile%.*}.mp4_tmp"

# Copy the file to the S3 bucket under the streams/ prefix
aws s3 cp "$tmpfile" "s3://$AWS_BUCKET_NAME/streams/" --acl public-read
if [ $? != 0 ]; then
    logger "$tmpfile failed to copy to S3."
else
    # Delete the local file only after confirming the object exists in the bucket
    if [ "$DELETE_LOCAL_FILE" == "Y" ]; then
        aws s3api head-object --bucket "$AWS_BUCKET_NAME" --key "streams/$(basename "$tmpfile")"
        if [ $? == 0 ]; then
            rm -f "$tmpfile"
            logger "$tmpfile is deleted."
        fi
    fi
fi
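If the transfer succeeds, the file should show up under the streams/ prefix of the bucket. A quick way to confirm (replace your-bucket-name with the value you set for AWS_BUCKET_NAME):
aws s3 ls s3://your-bucket-name/streams/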

Related

Using the `s3fs` python library with Task IAM role credentials on AWS Batch

I'm trying to get an ML job to run on AWS Batch. The job runs in a docker container, using credentials generated for a Task IAM Role.
I use DVC to manage the large data files needed for the task, which are hosted in an S3 repository. However, when the task tries to pull the data files, it gets an access denied message.
I can verify that the role has permissions to the bucket, because I can access the exact same files if I run an aws s3 cp command (as shown in the example below). But, I need to do it through DVC so that it downloads the right version of each file and puts it in the expected place.
I've been able to trace down the problem to s3fs, which is used by DVC to integrate with S3. As I demonstrate in the example below, it gets an access denied message even when I use s3fs by itself, passing in the credentials explicitly. It seems to fail on this line, where it tries to list the contents of the file after failing to find the object via a head_object call.
I suspect there may be a bug in s3fs, or in the particular combination of boto, http, and s3 libraries. Can anyone help me figure out how to fix this?
Here is a minimal reproducible example:
Shell script for the job:
#!/bin/bash
AWS_CREDENTIALS=$(curl http://169.254.170.2$AWS_CONTAINER_CREDENTIALS_RELATIVE_URI)
export AWS_DEFAULT_REGION=us-east-1
export AWS_ACCESS_KEY_ID=$(echo "$AWS_CREDENTIALS" | jq .AccessKeyId -r)
export AWS_SECRET_ACCESS_KEY=$(echo "$AWS_CREDENTIALS" | jq .SecretAccessKey -r)
export AWS_SESSION_TOKEN=$(echo "$AWS_CREDENTIALS" | jq .Token -r)
echo "AWS_ACCESS_KEY_ID=<$AWS_ACCESS_KEY_ID>"
echo "AWS_SECRET_ACCESS_KEY=<$(cat <(echo "$AWS_SECRET_ACCESS_KEY" | head -c 6) <(echo -n "...") <(echo "$AWS_SECRET_ACCESS_KEY" | tail -c 6))>"
echo "AWS_SESSION_TOKEN=<$(cat <(echo "$AWS_SESSION_TOKEN" | head -c 6) <(echo -n "...") <(echo "$AWS_SESSION_TOKEN" | tail -c 6))>"
dvc doctor
# Succeeds!
aws s3 ls s3://company-dvc/repo/
# Succeeds!
aws s3 cp s3://company-dvc/repo/00/0e4343c163bd70df0a6f9d81e1b4d2 mycopy.txt
# Fails!
python3 download_via_s3fs.py
download_via_s3fs.py:
import os
import s3fs
# Just to make sure we're reading the credentials correctly.
print(os.environ["AWS_ACCESS_KEY_ID"])
print(os.environ["AWS_SECRET_ACCESS_KEY"])
print(os.environ["AWS_SESSION_TOKEN"])
print("running with credentials")
fs = s3fs.S3FileSystem(
    key=os.environ["AWS_ACCESS_KEY_ID"],
    secret=os.environ["AWS_SECRET_ACCESS_KEY"],
    token=os.environ["AWS_SESSION_TOKEN"],
    client_kwargs={"region_name": "us-east-1"}
)
# Fails with "access denied" on ListObjectsV2
print(fs.exists("company-dvc/repo/00/0e4343c163bd70df0a6f9d81e1b4d2"))
Terraform for IAM role:
data "aws_iam_policy_document" "standard-batch-job-role" {
# S3 read access to related buckets
statement {
actions = [
"s3:Get*",
"s3:List*",
]
resources = [
data.aws_s3_bucket.company-dvc.arn,
"${data.aws_s3_bucket.company-dvc.arn}/*",
]
effect = "Allow"
}
}
Environment
OS: Ubuntu 20.04
Python: 3.10
s3fs: 2023.1.0
boto3: 1.24.59
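One extra check that may narrow this down, since s3fs falls back to a ListObjectsV2 call: run the equivalent listing directly with the CLI, using the same exported credentials and the bucket/prefix from the example above, and compare the result with what s3fs reports.
# Same credentials as the job script; should list the object if s3:List* really applies to this prefix
aws s3api list-objects-v2 --bucket company-dvc --prefix repo/00/ --max-items 5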

Travis AWS S3 SDK set cache header for particular file

In my Travis script, when uploading contents to an S3 bucket as follows:
deploy:
  provider: script
  skip_cleanup: true
  script: "~/.local/bin/aws s3 sync dist s3://mybucket --region=eu-west-1 --delete"
before_deploy:
  - npm run build
  - pip install --user awscli
I also want to set a no-cache header on a particular file in that bucket (i.e. sw.js). Is that currently possible in the SDK?
I am afraid this is not possible with a single s3 sync command, but you can try running two commands using the exclude and include options: one to sync everything except sw.js, and the other just for sw.js.
script: ~/.local/bin/aws s3 sync dist s3://mybucket --include "*" --exclude "sw.js" --region eu-west-1 --delete ; ~/.local/bin/aws s3 sync dist s3://mybucket --exclude "*" --include "sw.js" --region eu-west-1 --delete --cache-control "no-cache" --metadata-directive REPLACE
Note: the --metadata-directive REPLACE option is necessary for non-multipart copies.
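If you want to confirm the header was applied after the deploy, a head-object call on the uploaded key should show it (bucket and file names as in the question):
aws s3api head-object --bucket mybucket --key sw.js
# the response should include "CacheControl": "no-cache"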

I would like to set up rfc5766-turn-server on Ubuntu 14.04, can anyone give me all the steps listed together? I am doing it on AWS EC2

I have tried to install and set up rfc5766-turn-server on AWS EC2 but have been unable to, as I could not find a proper workflow or the command lines for it. Can someone help me with this? I need to set it up on Ubuntu 14.04.
Do an SSH login to your EC2 instance, then run the commands below to install and start the TURN server.
Commands for installing the TURN server:
sudo apt-get update
sudo apt-get install make gcc libssl-dev libevent-dev wget -y # for installing modules required by turn server
mkdir ~/turn && cd ~/turn # creating temp directory
wget turnserver.open-sys.org/downloads/v3.2.5.9/turnserver-3.2.5.9.tar.gz # downloading the TURN source code
tar -zxvf *.gz # extract
cd turn*
make
sudo make install # installing the rfc5766
cd ../.. && rm -rf turn # cleaning up
Command for starting the TURN server:
turnserver -a -o -v -n -u user:root -p 3478 -L INT_IP -r someRealm -X EXT_IP/INT_IP
Assumptions (a filled-in example follows after this list):
your public (external) IP, internal IP = EXT_IP, INT_IP
desired port for listening: 3478
single credential username:password = user:root
realm: someRealm
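For example, with hypothetical addresses 203.0.113.10 (external) and 172.31.5.20 (internal), the command would look like:
turnserver -a -o -v -n -u user:root -p 3478 -L 172.31.5.20 -r someRealm -X 203.0.113.10/172.31.5.20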
In your WebRTC app, you can use the TURN server like this:
{
  url: 'turn:user@EXT_IP:3478',
  credential: 'root'
}
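To check that the server is actually listening before testing from a browser (netstat comes from the net-tools package on Ubuntu 14.04):
sudo netstat -lnup | grep 3478
Also make sure port 3478 (UDP and TCP) is open in the instance's security group.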

Transporting data from Amazon S3 to a local server

I am trying to import data from S3 using the script described below (which I sort of inherited). It's a bit long... The problem is that I keep receiving the following output:
The config profile (importer) could not be found
I am not a bash person, so be gentle, please. It seems some credentials are missing, or something else is wrong with the configuration of "importer" on the local machine.
In the S3 configuration (the console) there is a user with the same name which, according to its permissions, can access the bucket and download data.
I have tried changing the access keys for the user in the Amazon console and creating a file named "credentials" in ~/.aws (there was no .aws folder in the home directory by default, so I created it), putting the new keys in that file, and upgrading the AWS CLI with pip. Nothing helped.
Then I modified "credentials", using [importer] as the profile name, so it looked like:
[importer]
aws_access_key = xxxxxxxxxxxxxxxxx
aws_secret+key = xxxxxxxxxxxxxxxxxxx
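For reference, the key names the AWS CLI normally expects in ~/.aws/credentials are aws_access_key_id and aws_secret_access_key, e.g.:
[importer]
aws_access_key_id = xxxxxxxxxxxxxxxxx
aws_secret_access_key = xxxxxxxxxxxxxxxxxxx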
It appears I got past the misconfiguration:
A client error (InvalidAccessKeyId) occurred when calling the ListObjects operation: The AWS Access Key Id you provided does not exist in our records.
Completed 1 part(s) with ... file(s) remaining
And here's the part where I am stuck... I placed the keys I obtained from Amazon into that config file. Double checked... Any suggestions? I can't create any more keys (AWS quota per user). Below is part of the script:
#!/bin/sh
echo "\n$0 started at: `date`"
incomming='/Database/incomming'
IFS='
';
mkdir -p ${incomming}
echo "syncing files from arrivals bucket to ${incomming} incomming folder"
echo aws --profile importer \
s3 --region eu-west-1 sync s3://path-to-s3-folder ${incomming}
aws --profile importer \
s3 --region eu-west-1 sync s3://path-to-s3-folder ${incomming}
count=0
echo ""
echo "Searching for zip files in ${incomming} folder"
for f in `find ${incomming} -name '*.zip'`;
do
    echo "\n${count}: ${f} --------------------"
    count=$((count+1))
    name=`basename "$f" | cut -d'.' -f1`
    dir=`dirname "$f"`
    if [ -d "${dir}/${name}" ]; then
        echo "\tWarning: directory "${dir}/${name}" already exist for file: ${f} ... - skipping - not imported"
        continue
    fi
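A quick way to check which keys and region the importer profile actually resolves to on that machine (these commands are not part of the original script; the second one needs a reasonably recent CLI):
aws configure list --profile importer
aws sts get-caller-identity --profile importer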

s3cmd: backup folder to Amazon S3

I'm using s3cmd to back up my databases to Amazon S3, but I'd also like to back up a certain folder and archive it.
I have this part of a script that successfully backs up the databases to S3:
# Loop the databases
for db in $databases; do
    # Define our filenames
    filename="$stamp - $db.sql.gz"
    tmpfile="/tmp/$filename"
    object="$bucket/$stamp/$filename"
    # Feedback
    echo -e "\e[1;34m$db\e[00m"
    # Dump and zip
    echo -e " creating \e[0;35m$tmpfile\e[00m"
    mysqldump -u root -p$mysqlpass --force --opt --databases "$db" | gzip -c > "$tmpfile"
    # Upload
    echo -e " uploading..."
    s3cmd put "$tmpfile" "$object"
    # Delete
    rm -f "$tmpfile"
done;
How can I add another section to archive a certain folder, upload to S3 and then delete the local archive?
Untested and basic, but this should get the job done with some minor tweaks:
# change to tmp dir - creating archives with absolute paths can be dangerous
cd /tmp
# create archive with timestamp of dir /path/to/directory/to/archive
tar -czf "$stamp-archivename.tar.gz" /path/to/directory/to/archive
# upload archive to s3 bucket 'BucketName'
s3cmd put "/tmp/$stamp-archivename.tar.gz" s3://BucketName/
# remove local archive
rm -f "/tmp/$stamp-archivename.tar.gz"