TensorFlow Serving from S3 path is not working - Could not find base path s3://bucketname

Loading a model from S3 is not working for me. As per #615 I compiled the server using:
bazel build -s -c opt --define with_s3_support=true //tensorflow_serving/model_servers:tensorflow_model_server
and now when I run it using
bazel-bin/tensorflow_serving/model_servers/tensorflow_model_server --model_base_path=s3://bucketname/
I'm getting
FileSystemStoragePathSource encountered a file-system access error: Could not find base path s3://bucketname/ for servable default
Any tips on how to fix that?

s3://bucketname/ isn't resolvable unless you have the AWS SDK installed on that machine.
A much better approach would be to use the URL of the model on S3. If you're going to do that, you have to either make the bucket public, in which case everyone will be able to access it, or create a bucket policy that allows access from the server's IP.
If you're hosting your server on AWS, you can also launch it with an IAM role and give it S3FullAccess. This is best for any sort of production environment because you don't have to store API keys in your source code.
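If you do go the credentials route rather than an IAM role, the S3 filesystem built into TensorFlow Serving reads the standard AWS environment variables. A minimal sketch, assuming a build with with_s3_support and a bucket in us-east-1 (the keys, region and bucket name are placeholders):
# assumption: standard AWS credential/region variables read by TensorFlow's S3 filesystem layer
export AWS_ACCESS_KEY_ID=YOUR_KEY_ID
export AWS_SECRET_ACCESS_KEY=YOUR_SECRET_ACCESS_KEY
export AWS_REGION=us-east-1
export S3_ENDPOINT=s3.us-east-1.amazonaws.com
bazel-bin/tensorflow_serving/model_servers/tensorflow_model_server \
  --model_name=default --model_base_path=s3://bucketname/
It is also worth checking that the base path contains numbered version subdirectories (e.g. s3://bucketname/1/), since TensorFlow Serving looks for versions under the base path.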

Related

MLflow artifacts on S3 but not in UI

I'm running MLflow on my local machine and logging everything through a remote tracking server, with my artifacts going to an S3 bucket. I've confirmed that they are present in S3 after a run, but when I look at the UI the artifacts section is completely blank. There's no error, just empty space.
Any idea why this is? I've included a picture from the UI.
You should see a 500 response for the artifacts request to the MLflow tracking server, e.g. in the browser console when you click through to the page of the model of interest. The UI service wouldn't know the location (since you set that to be an S3 bucket) and tries to load the defaults.
You need to specify the --artifacts-destination s3://yourPathToArtifacts argument to your mlflow server command. Also, when running the server in your environment, don't forget to supply some common AWS credentials provider(s) (such as the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY env variables) as well as the MLFLOW_S3_ENDPOINT_URL env variable to point to your S3 endpoint. A sketch of the full invocation is shown below.
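For reference, a hedged sketch of the server invocation with those settings, assuming a recent MLflow version that supports --artifacts-destination (the backend store, bucket and endpoint are placeholders):
export AWS_ACCESS_KEY_ID=YOUR_KEY_ID
export AWS_SECRET_ACCESS_KEY=YOUR_SECRET_ACCESS_KEY
export MLFLOW_S3_ENDPOINT_URL=https://s3.us-east-1.amazonaws.com
mlflow server \
  --backend-store-uri sqlite:///mlflow.db \
  --artifacts-destination s3://yourPathToArtifacts \
  --host 0.0.0.0 --port 5000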
I had the same issue with MLflow running on an EC2 instance. I logged into the server and noticed that it was overloaded and no disk space was left. I deleted a few temp files and the MLflow UI started displaying the files again. It seems like MLflow stores tons of tmp files, but that is a separate issue.
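If you suspect the same disk-pressure problem, a quick generic check on the instance (not MLflow-specific):
df -h                                             # look for a filesystem at or near 100% use
sudo du -sh /tmp/* 2>/dev/null | sort -h | tail   # find the largest temp files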

Artifactory Automatic Migration to S3 from local Storage does not happen

As per the official documentation, I have created the eventual directory and the symbolic links within it (_add pointing to filestore and _pre pointing to _pre). The automatic migration does not happen. I am using the Docker container of Artifactory Pro version 6.23.13. I waited overnight for the migration to happen, but it didn't. Also, Artifactory was serving only 4 artifacts.
Answering my own question: I had initially created the eventual directory and links under /var/opt/jfrog/artifactory, which is the home for the Docker container. It seems there is another path inside the container, /opt/jfrog/artifactory, and creating the directory and links under that path did the trick.
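For anyone hitting the same thing, a rough sketch of the fix as described above, run inside the container (the container name and exact subpaths are assumptions; check the JFrog eventual-provider migration guide for your version before relying on them):
docker exec -it artifactory bash                       # container name is an assumption
mkdir -p /opt/jfrog/artifactory/data/eventual          # recreate the eventual directory under /opt, not /var/opt
ln -s /opt/jfrog/artifactory/data/filestore /opt/jfrog/artifactory/data/eventual/_add
# plus the _pre link described in the official migration documentation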

How to download a file from S3 onto an EC2 instance using Packer to build a custom AMI

I am trying to create a custom AMI using Packer.
I want to install some specific software on the custom AMI, and my setup files are in an S3 bucket. But it seems there is no direct way to download an S3 file in Packer the way cfn-init can.
So is there any way to download a file onto the EC2 instance using Packer?
Install the awscli in the instance and use iam_instance_profile to give the instance permissions to get the files from S3.
I can envisage a situation where this is ineffective.
When building the image on AWS you use your local credentials. While the image is building, the instance Packer launches runs as its own Packer user, not as you, so it doesn't have your credentials and can't access the S3 bucket (if it's private).
Option one: https://github.com/enmand/packer-provisioner-s3
Option two: use a local shell provisioner to pull the S3 files down to your machine with aws s3 cp, then a file provisioner to upload them to the correct folder in the builder image; you can then use a remote shell provisioner to do any other work on the files. I chose this because, although it's more code, it is more universal: when I share my build, others have no need to install anything extra. (A minimal sketch of this option follows the list of options.)
Option three: wait and wait. There is an enhancement discussed in the Packer GitHub in 2019 to offer an S3 passthrough using local creds, but it isn't on the official roadmap.
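A minimal sketch of option two (bucket and paths are placeholders; the aws s3 cp runs on the machine executing packer build, using your local credentials):
# shell-local provisioner: pull the setup files down with your own credentials
aws s3 cp s3://your-bucket/setup/ ./setup/ --recursive
# file provisioner: upload ./setup/ into the builder instance (e.g. to /tmp/setup)
# shell provisioner: install from /tmp/setup inside the instance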
Assuming the awscli is already installed on the EC2 instance, use the sample command below in a shell provisioner.
sudo aws s3 cp s3://bucket-name/path_to_folder/file_name /home/ec2-user/temp

How to use Amazon S3 as Moodle Data Root

I am trying to move my moodledata folder content into Amazon S3, but I couldn't find any document (or guide) for configuring this setup.
I am using the Moodle 3.3 STABLE build version.
Can anyone help me set this up?
You could use s3fs and mount it on your webserver.
I suggest using a local directory (for performance) for cache, localcache and sessions.
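A minimal sketch of the s3fs approach on Ubuntu (bucket name, mount point and credentials are placeholders; the option names come from s3fs-fuse, so check its documentation for your version):
sudo apt-get install -y s3fs
echo YOUR_KEY_ID:YOUR_SECRET_ACCESS_KEY | sudo tee /etc/passwd-s3fs
sudo chmod 600 /etc/passwd-s3fs
sudo mkdir -p /var/moodledata
sudo s3fs your-moodle-bucket /var/moodledata -o passwd_file=/etc/passwd-s3fs -o allow_other -o use_cache=/tmp
Keep cache, localcache and sessions on fast local disk, as suggested above.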

ASP.NET Core 2.0 site running on Lightsail Ubuntu can't access AWS S3

I have a website that I've built with ASP.NET Core 2.0. The website gets a list of files sitting in my AWS S3 bucket and displays them as links for authenticated users. When I run the website locally I have no issues and am able to access S3 to generate pre-signed URLs. When I deploy the web app to Lightsail Ubuntu I get this incredibly useful error message: AmazonS3Exception: Access Denied.
At first I thought it was a region issue. I changed my S3 buckets to use the same region as my Lightsail Ubuntu instance (East #2). Then I thought it might be a CORS issue and made sure that my buckets allowed CORS.
I'm kinda stuck at the moment.
I had exactly the same issue, and I solved it by creating environment variables. To add environment variables permanently in Ubuntu, open the environment file with the following command
sudo vi /etc/environment
then add your credentials like this
AWS_ACCESS_KEY_ID=YOUR_KEY_ID
AWS_SECRET_ACCESS_KEY=YOUR_SECRET_ACCESS_KEY
Then save the environment file and restart your ASP.NET Core app.
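A quick sanity check after saving the file (how you restart depends on how the app is hosted; the systemd service name below is hypothetical):
# /etc/environment is read at login, so log out and back in (or reboot), then verify:
printenv AWS_ACCESS_KEY_ID
# restart the app, e.g. if it runs as a systemd service:
sudo systemctl restart kestrel-mysite.service        # hypothetical service name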