I'm new to Redis and learning how to use it. I have an exercise that requires creating a shopping-cart Redis db, using the client id as key and storing an image, using plain commands. I've researched the Redis commands site and others, but I only found ways to do the task with Python or Node.js. Can anyone show me an example of doing it with commands? Thanks.
cat spam_comments.png | redis-cli -x set spam_comments.png
The -x option makes redis-cli read the value from stdin, so the raw PNG bytes are stored under the key spam_comments.png. To read the image back into a file and verify it:
$ redis-cli get spam_comments.png > /tmp/img.png
$ file /tmp/img.png
/tmp/img.png: PNG image data, 392 x 66, 8-bit/color RGB, non-interlaced
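For the shopping-cart part of the exercise, one common layout is a hash per client, keyed by the client id. A minimal sketch with redis-cli; the key cart:client42, the item fields, and product.png are made-up examples for illustration:
# one cart per client: field = item id, value = quantity
redis-cli HSET cart:client42 item:1001 2
redis-cli HSET cart:client42 item:2005 1
redis-cli HGETALL cart:client42
# an image can be stored as a separate binary value tied to the same client id, using the same -x trick
redis-cli -x SET image:client42 < product.png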
I followed the instructions from this link to build a container image in Azure Container Registry whenever a commit is pushed.
Everything is working fine, and I appreciate that we can retrieve the Run.ID:
az acr task create -t acb:{{.Run.ID}} -n acb-win -r MyRegistry \
-c https://github.com/Azure/acr-builder.git -f Windows.Dockerfile \
--commit-trigger-enabled false --platform Windows/amd64
I also see that we can use another tag like {{.Run.Registry}} instead of {{.Run.ID}}.
I am curious to know which other tags exist. In my workflow, I wonder if it is possible to retrieve the commit ID.
Has anyone succeeded in retrieving the commit ID? I tested several combinations but had no luck.
Many thanks to the community.
Answering my own question: I found this link https://github.com/MicrosoftDocs/azure-docs/blob/main/articles/container-registry/container-registry-tasks-reference-yaml.md#task-step-properties which clearly explains all the variables that can be used.
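For what it's worth, that reference also lists a {{.Run.Commit}} run variable, which (as I understand it) holds the commit ID that triggered the run, so it is presumably only populated when the commit trigger is enabled. A sketch of the task from the question, tagging the image with the commit instead of the run ID:
az acr task create -t acb:{{.Run.Commit}} -n acb-win -r MyRegistry \
-c https://github.com/Azure/acr-builder.git -f Windows.Dockerfile \
--platform Windows/amd64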
So I have a Google Cloud Storage bucket which follows this style of directory:
gs://mybucket/{year}/{month}/{day}/{a csv file here}
The CSV files all follow the same schema, so that shouldn't be an issue. I was wondering if there is an easier method of loading all the files into one table in BigQuery, with one command or even a Cloud Function. I've been using bq load to accomplish this for now, but since I have to do this about every week, I'd like to get some automation for it.
Inspired by this answer
You can recursively load your files with the following command:
gsutil ls gs://mybucket/**.csv | \
xargs -I{} echo {} | \
awk -F'[/.]' '{print "yourdataset."$7"_"$4"_"$5"_"$6" "$0}' | \
xargs -I{} sh -c 'bq --location=YOUR_LOCATION load --replace=false --autodetect --source_format=CSV {}'
This loads your CSV files into independent tables in your target dataset, with the naming convention "filename_year_month_day".
The "recursive" part is ensured by the double wildcard (**).
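For instance, a file at gs://mybucket/2021/01/15/sales.csv (a made-up path with the expected depth) would produce the following arguments for bq load, i.e. a table named after the file and the date:
yourdataset.sales_2021_01_15 gs://mybucket/2021/01/15/sales.csv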
This is for the manual part.
For the automation part, you have a choice between different options:
- the easiest one is probably to associate a Cloud Function that you trigger with Cloud Scheduler. There is no bash runtime available, so you would for instance have to Python your way through. Here is what a quick Google search gave me.
- it is possible to do it with an orchestrator (Cloud Composer) if you already have the infrastructure (if you don't, it's not worth setting it up just for this).
- another solution is to use Cloud Run, triggered either by Cloud Scheduler (on a regular schedule) or through Eventarc triggers when your CSV files are uploaded to GCS; see the gcloud sketch after this list.
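A rough sketch of the Cloud Run + Cloud Scheduler option with gcloud; the service name, image, region, URL, and service account below are placeholders, and the csv-loader image is assumed to be a small container (e.g. based on google/cloud-sdk, so gsutil and bq are available) that runs the load command above on each request:
gcloud run deploy csv-loader --image gcr.io/MY_PROJECT/csv-loader \
  --region europe-west1 --no-allow-unauthenticated
gcloud scheduler jobs create http weekly-csv-load \
  --schedule="0 6 * * 1" \
  --uri="https://csv-loader-xxxxx-ew.a.run.app/" \
  --http-method=POST \
  --oidc-service-account-email=loader@MY_PROJECT.iam.gserviceaccount.com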
Setting: lots of mp3 recordings of customer support conversations, somewhere in a db. Each mp3 has 2 channels: one is the customer rep, the other is the customer's voice.
I need to extract an embedding (tensor) of the customer's voice. It's a 3-step process:
get the channel, cut 10 seconds, convert to an embedding. I have functions for all 3 steps.
The embedding is a vector tensor:
tensor([[0.6540e+00, 0.8760e+00, 0.898e+00, 0.8789e+00, 0.1000e+00, 5.3733e+00]])
Tested with Postman. The get_embedding function is part of the server code below.
I want to build a REST API that, on one endpoint, connects to the db of mp3 files and outputs the embeddings to another db.
I need to clarify an important point about Docker.
When I run "python server.py", Flask makes it available on my local PC at 127.0.0.1:9090:
from flask import Flask, jsonify

app = Flask(__name__)

def get_embedding(file):
    # some code
    pass

@app.route('/health')
def check():
    return jsonify({'response': 'OK!'})

@app.route('/get_embedding')
def show_embedding():
    return get_embedding(file1)  # file1: the mp3 file to embed, defined elsewhere

if __name__ == '__main__':
    app.run(debug=True, port=9090)
When I do it with Docker, where do the server and the files go? Where does it become available online? Does Docker upload all the files to some default Docker cloud?
You need to write a Dockerfile to build your Docker image, then run a container from that image, publishing the port, and you can access it at machineIP:PORT. The files are copied into the image at build time and the server runs on your own machine; nothing is uploaded to any default Docker cloud unless you explicitly push the image to a registry.
Below is an example.
Dockerfile
# FROM tells Docker which image you base your image on (in the example, Python 3).
FROM python:3

# WORKDIR sets the working directory inside the container.
WORKDIR /usr/app

# COPY files from your host to the image working directory.
COPY my_script.py .

# RUN tells Docker which additional commands to execute (e.g. installing dependencies).
RUN pip install pystrich

# CMD sets the command the container runs at startup.
CMD [ "python", "./my_script.py" ]
Ref:- https://docs.docker.com/engine/reference/builder/
And then build the image,
docker build -t server .
Ref:- https://docs.docker.com/engine/reference/commandline/build/
Once the image is built, start a container and publish the port through which you can access your application.
E.g.
docker run -p 9090:9090 server
-p publishes a container's port(s) to the host.
Then access your application at localhost:9090, 127.0.0.1:9090, or machineIP:PublishedPort. Note that inside the container Flask must listen on all interfaces (e.g. app.run(host='0.0.0.0', port=9090)); with the default 127.0.0.1 binding, the published port will not reach the app.
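As a quick sanity check from the host, assuming the Flask app from the question is what runs inside the container and it binds to 0.0.0.0, hitting its /health route should return the {'response': 'OK!'} JSON:
$ curl http://127.0.0.1:9090/health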
I just found that my box has only 5% of hard drive space left, and I have almost 250GB of MySQL binlog files that I want to send to S3. We have moved from MySQL to NoSQL and are not currently using MySQL, but I would love to preserve the old data from before the migration.
The problem is that I can't just tar the files in a loop before sending them there. So I was thinking I could gzip on the fly before sending, so the compressed file is never stored on the HDD.
for i in * ; do cat i | gzip -9c | s3cmd put - s3://mybudcket/mybackups/$i.gz; done
To test this command, I ran it without the loop and it didn't send anything, but it didn't complain about anything either. Is there any way of achieving this?
OS is Ubuntu 12.04.
s3cmd version is 1.0.0.
Thank you for your suggestions.
Alternatively you can use https://github.com/minio/mc . Minio Client, aka mc, is written in Golang and released under Apache License Version 2.
It implements the mc pipe command, which lets users stream data directly to Amazon S3. mc pipe can also pipe to multiple destinations in parallel. Internally, mc pipe streams the output and does a multipart upload in parallel.
$ mc pipe
NAME:
mc pipe - Write contents of stdin to files. Pipe is the opposite of cat command.
USAGE:
mc pipe TARGET [TARGET...]
Example
#!/bin/bash
for i in *; do
    mc cat "$i" | gzip -9c | mc pipe "https://s3.amazonaws.com/mybudcket/mybackups/$i.gz"
done
As you can see, mc also implements an mc cat command :-).
The ability to upload from stdin to S3 was added to the s3cmd master branch in February 2014, so make sure your version is newer than that. Version 1.0.0 is from 2011 or earlier; the current version (at the time of writing) is 1.5.2. It's likely you need to update your version of s3cmd.
Other than that, according to https://github.com/s3tools/s3cmd/issues/270 this should work, except that your "do cat i" is missing the $ sign to mark i as a variable.
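Putting both points together, a corrected version of the loop from the question (assuming an s3cmd release new enough to support put from stdin) would look something like this; the cat is dropped since gzip can read the file directly:
for i in *; do gzip -9c "$i" | s3cmd put - "s3://mybudcket/mybackups/$i.gz"; done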
I've split a big binary file into 2GB chunks and uploaded them to Amazon S3.
Now I want to join it back into one file and process it with my custom
I've tried to run
elastic-mapreduce -j $JOBID -ssh \
"hadoop dfs -cat s3n://bucket/dir/in/* > s3n://bucket/dir/outfile"
but it failed because -cat outputs the data to my local terminal; it does not work remotely...
How can I do this?
P.S. I've tried to run cat as a streaming MR job:
den#aws:~$ elastic-mapreduce --create --stream --input s3n://bucket/dir/in \
--output s3n://bucket/dir/out --mapper /bin/cat --reducer NONE
This job finished successfully. But: I had 3 file parts in dir/in, and now I have 6 parts in /dir/out:
part-0000
part-0001
part-0002
part-0003
part-0004
part-0005
And a _SUCCESS file, of course, which is not part of my output...
So, how do I join the file that was split before?
So, I've found a solution. Maybe not the best one, but it works.
I created an EMR job flow with a bootstrap action
--bootstrap-action joinfiles.sh
and in that joinfiles.sh I download my file pieces from S3 using wget and join them with a regular cat a b c > abc.
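A minimal sketch of what such a joinfiles.sh could look like; the bucket URL, the part names, and the output path are made up, and the pieces are assumed to be downloadable with plain wget as described above:
#!/bin/bash
set -e
mkdir -p /mnt/joined
cd /mnt/joined
# download the pieces from S3
for part in part-a part-b part-c; do
    wget "https://bucket.s3.amazonaws.com/dir/in/$part"
done
# concatenate them back into a single file, in order
cat part-a part-b part-c > joined.bin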
After that I added an s3distcp step which copies the result back to S3 (a sample can be found at https://stackoverflow.com/a/12302277/658346).
That is all.