How do you read in Kaggle data?

I am running the python code from lesson 3 of the fast ai course. However, I'm receiving a “syntax” error. Any pointers, please?
! mkdir -p ~/.kaggle/
! mv kaggle.json ~/.kaggle/
! mkdir %userprofile%.kaggle
! move kaggle.json %userprofile%.kaggle
Cheers.
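For reference, the `!` prefix is Jupyter/IPython-only; pasted into a plain terminal or script it is a syntax error, and the Windows variant also appears to be missing a path separator (`%userprofile%\.kaggle` rather than `%userprofile%.kaggle`). A minimal sketch of the Linux/macOS steps as they would run in a terminal, with a placeholder credentials file standing in for the real kaggle.json:

```shell
# Sketch: drop the '!' outside a notebook cell and run these directly.
echo '{"username":"me","key":"xxx"}' > kaggle.json   # placeholder for this demo
mkdir -p ~/.kaggle
mv kaggle.json ~/.kaggle/
chmod 600 ~/.kaggle/kaggle.json   # the kaggle CLI warns if permissions are looser
ls ~/.kaggle/kaggle.json
```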

Related

zsh: killed mkdir -m 700 -p "$SHELL_SESSION_DIR"

This is the message when I open my terminal, and whenever I try to run commands it often shows "zsh: killed". How can I solve this issue?

Why does Google colab say: chmod: cannot access 'RDP.sh': No such file or directory

When I put the following code in the Google Colab Run cell:
! wget https://raw.githubusercontent.com/alok676875/RDP/main/RDP.sh &> /dev/null
! chmod +x RDP.sh
! ./RDP.sh
The result is as follows:
chmod: cannot access 'RDP.sh': No such file or directory
/bin/bash: ./RDP.sh: No such file or directory
Please tell me where the error is and what the solution is. Thank you.
This file
https://raw.githubusercontent.com/alok676875/RDP/main/RDP.sh
was deleted, so you can't download it.
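More generally, the `&> /dev/null` in the question hides exactly the message that would have explained the failure. A sketch that surfaces the error instead (same URL as in the question, which no longer resolves):

```shell
URL=https://raw.githubusercontent.com/alok676875/RDP/main/RDP.sh
# Let wget report failures instead of discarding its output:
if wget -q "$URL"; then
    chmod +x RDP.sh && ./RDP.sh
else
    echo "wget could not fetch $URL (deleted or unreachable)" >&2
fi
```

Checking the download's exit status before `chmod` avoids the confusing follow-on "No such file or directory" errors.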

How to Run two benchmarks in command line in gem5?

I am running two benchmarks from MiBench in the command line using the following script; I separate the two applications with a semicolon. Kindly confirm whether I am using the right script. Please help.
./build/X86/gem5.opt -d ./Mi-combination ./configs/example/se.py -c "/media/shukla/Windows/gem5_old/benchmar/MiBench/X86/network/dijkstra/dijkstra_small;/media/shukla/Windows/gem5_old/benchmar/MiBench/X86/network/patricia/patricia" -o "/media/shukla/Windows/gem5_old/benchmar/MiBench/X86/network/dijkstra/input.dat;/media/shukla/Windows/gem5_old/benchmar/MiBench/X86/network/patricia/small.udp" --cpu-type=MinorCPU --cpu-clock=1GHz --num-cpu=2 --caches --l2cache --l1d_size=32kB --l1i_size=32kB --l2_size=512kB --l1d_assoc=2 --l1i_assoc=2 --l2_assoc=2
Thanks in advance
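For readability, the `;`-separated lists can be built in variables first; se.py pairs the Nth command with the Nth option string and the Nth CPU. A sketch using the paths from the question (echoed rather than executed, since it needs a gem5 build; note that in stock gem5 the flag is spelled `--num-cpus`, not `--num-cpu`):

```shell
# One ';'-separated entry per CPU, in matching order for -c and -o.
BENCH=/media/shukla/Windows/gem5_old/benchmar/MiBench/X86/network
CMDS="$BENCH/dijkstra/dijkstra_small;$BENCH/patricia/patricia"
OPTS="$BENCH/dijkstra/input.dat;$BENCH/patricia/small.udp"
echo ./build/X86/gem5.opt -d ./Mi-combination ./configs/example/se.py \
    -c "$CMDS" -o "$OPTS" \
    --cpu-type=MinorCPU --cpu-clock=1GHz --num-cpus=2 \
    --caches --l2cache --l1d_size=32kB --l1i_size=32kB --l2_size=512kB \
    --l1d_assoc=2 --l1i_assoc=2 --l2_assoc=2
```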

Facing error while running Docker on Tensorflow serving image

I have trained a TensorFlow Object Detection model. I am trying to make a REST request using the TensorFlow Serving image on Docker (following the instructions from https://github.com/tensorflow/serving).
TESTDATA="$(pwd)/serving/tensorflow_serving/servables/tensorflow/testdata/"
docker run -t --rm -p 8501:8501 \
-v "$TESTDATA/my_model:/models/work_place_safety" \
-e MODEL_NAME=work_place_safety \
tensorflow/serving &
I am facing the below error message:
$ C:\Program Files\Docker\Docker\Resources\bin\docker.exe: Error response from daemon: Mount denied:
The source path "C:/Users/Desktop/models/serving/tensorflow_serving/servables/tensorflow/testdata/saved_model_work_place_safety;C"
doesn't exist and is not known to Docker.
See 'C:\Program Files\Docker\Docker\Resources\bin\docker.exe run --help'.
I wonder why it's including ";C" at the end of the source path and throwing an error.
Any Help is much appreciated.
Thanks
Resolved the issue by adding a / before $ in Git Bash:
docker run -t --rm -p 8501:8501 \
-v /$TESTDATA/my_model:/models/my_model \
-e MODEL_NAME=my_model \
tensorflow/serving &
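The underlying cause is MSYS path conversion in Git Bash: it rewrites POSIX-style paths in arguments into Windows paths, which is where the stray ";C" comes from. Besides the leading-`/` trick, Git for Windows also honors `MSYS_NO_PATHCONV`. A sketch of both workarounds (the docker invocation is echoed so the snippet runs without a Docker daemon; both are harmless no-ops on real Linux):

```shell
TESTDATA="$(pwd)/serving/tensorflow_serving/servables/tensorflow/testdata"
# Option 1: an extra leading '/' hides the path from MSYS conversion.
MOUNT="/$TESTDATA/my_model:/models/my_model"
# Option 2: disable MSYS path conversion for this one command.
MSYS_NO_PATHCONV=1 echo docker run -t --rm -p 8501:8501 \
    -v "$MOUNT" -e MODEL_NAME=my_model tensorflow/serving
```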
What is the value of my_model? Is it saved_model_work_place_safety?
Are you sure that your Saved Objection Detection Model is in the Folder, saved_model_work_place_safety and that Folder is in the path, $(pwd)/serving/tensorflow_serving/servables/tensorflow/testdata/?
If it is not inside testdata, you should mention the correct path, where saved_model_work_place_safety is present.
Folder structure should be something like this =>
saved_model_work_place_safety => 00000123 or 1556272508 or 1 => .pb file and Variables Folder.
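A minimal sketch of that layout (version directory name and file names follow standard TensorFlow SavedModel conventions; the skeleton here is built with placeholder files just to show the shape):

```shell
# Expected shape: model_name/<version>/saved_model.pb plus a variables/ folder.
mkdir -p saved_model_work_place_safety/1/variables
touch saved_model_work_place_safety/1/saved_model.pb \
      saved_model_work_place_safety/1/variables/variables.index
find saved_model_work_place_safety -mindepth 1 | sort
```

TensorFlow Serving mounts the model_name directory and picks the highest-numbered version folder inside it.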

CommandException: Caught non-retryable exception - aborting rsync

After using gsutil for more than 1 year I suddenly have this error:
.....
At destination listing 8350000...
At destination listing 8360000...
CommandException: Caught non-retryable exception - aborting rsync
.....
I tried to locate the files with this sync problem but I am not able to do so. Is there a "skip error" option, or is there a way I can make gsutil more verbose?
My command line is like this:
gsutil -V -m rsync -d -r -U -P -C -e -x -x 'Download/*' /opt/ gs://mybucket1/kraanloos/
I have created a script to split the problem. This gives me more info for a solution
#!/bin/bash
# Sync each top-level directory separately to narrow down the failing one.
array=(
3ware
AirTime
Amsterdam
BigBag
Download
guide
home
Install
Holding
Multimedia
newsite
Overig
Trak-r
)
for i in "${array[@]}"
do
    echo "Processing : $i"
    # Run the command directly; storing it in a string and expanding it
    # would break the quoting of the 'Backup/*' exclude pattern.
    /usr/bin/gsutil -m rsync -d -r -U -P -C -e -x 'Backup/*' "/opt/$i/" "gs://mybucket1/kraanloos/$i/"
    echo ""
    echo ""
done
I've been struggling with the same problem the last few days. One way to make it super verbose is to put the -D flag before the rsync argument, as in:
gsutil -D rsync ...
By doing that, I found that my problem is due to having # characters in filenames, as in this question.
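A quick way to hunt for such filenames before the sync, demonstrated here in a throwaway directory (against a real tree you would point `find` at the source directory, e.g. /opt):

```shell
# Find files whose names contain '#', which gsutil rsync can refuse to handle.
dir=$(mktemp -d)
touch "$dir/report#1.txt" "$dir/clean.txt"
matches=$(find "$dir" -name '*#*')
echo "$matches"
```

Renaming the matches (or excluding them with `-x`) lets the rest of the rsync proceed.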
In my case, it was because of a broken link to a directory.
As blambert said, use the -D option to see exactly what file causes the problem.
I had struggled with this problem as well, and I figured it out.
You need to re-authenticate your Google Cloud SDK shell and set a target project again.
It seems like rsync will not show the correct error message.
Try cp instead; it will guide you to authenticate and set the correct primary project:
gsutil cp OBJECT_LOCATION gs://DESTINATION_BUCKET_NAME/
After that, your gsutil rsync should run fine.