unsupported format character error in youtube-dl

I've been trying to download a YouTube playlist using youtube-dl, but I ran into a problem with the output template. I used the following command to download the playlist in an organised way:
youtube-dl -f mp4 -o "Desktop/mainFolder/courses/%(playlist_title)s-%(playlist_uploader)/%(title)s.%(ext)s" --embed-thumbnail --add-metadata --mark-watched https://www.youtube.com/playlist?list=PLzH6n4zXuckpfMu_4Ff8E7Z1behQks5ba
but I kept getting the following error:
ERROR: Error in output template: unsupported format character '/' (0x2f) at index 66 (encoding: 'UTF-8')
It states that I used an unsupported character, '/'. Weirdly enough, I used almost the same output template in a previous download:
youtube-dl -f mp4 -o "Desktop/mainFolder/courses/%(playlist)s/%(playlist_index)s - %(title)s.%(ext)s" --add-metadata https://www.youtube.com/playlist?list=PL4C9296DF81B9EF13
and it worked just fine.
What did I do differently here, so that the first command didn't work but the second one did?

If you see this error, it probably means one of the format expressions doesn't end with an s.
In this case, it looks like you're missing one after %(playlist_uploader).
youtube-dl -f mp4 -o "Desktop/mainFolder/courses/%(playlist_title)s-%(playlist_uploader)s/%(title)s.%(ext)s" --embed-thumbnail --add-metadata --mark-watched https://www.youtube.com/playlist?list=PLzH6n4zXuckpfMu_4Ff8E7Z1behQks5ba
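For background, youtube-dl output templates are Python percent-format strings, so every %(field) must be closed by a conversion character such as s. The error can be reproduced in plain Python (a minimal sketch, not youtube-dl code; the field values are made up):
fields = {"playlist_uploader": "someone", "title": "intro"}
print("%(playlist_uploader)s/%(title)s" % fields)  # OK: prints someone/intro
print("%(playlist_uploader)/%(title)s" % fields)   # ValueError: unsupported format character '/' (0x2f)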

Related

Error in using nco ncremap to remap one netcdf file to grid of another

I have a data set with multiple netcdf files with the same variables and structure, though the grid shifts periodically over the time series. I am simply trying to remap one file to another. However, the following command, when run with the linked data files 2016090618.nc and 2016090712.nc:
ncremap -d 2016090618.nc -i 2016090712.nc -o outputfile_2016090712.nc
results in the following error:
Input #00: /content/drive/MyDrive/2016090712.nc
Grid(src): /tmp/ncremap_tmp_grd_src.nc.pid198744
Grid(dst): /tmp/ncremap_tmp_grd_dst.nc.pid198744
Map/Wgt : /tmp/ncremap_tmp_map_nco_nco_con.nc.pid198744
ncks: ERROR nco_rgr_wgt() reports frc_out == frac_b contains all zeros
ncremap: ERROR Failed to horizontally regrid. cmd_rgr[0] failed. Debug this:
ncks -O -t 2 --no_tmp_fl --gaa remap_script=ncremap --gaa remap_command="'/usr/bin/ncremap -d 2016090618.nc -i 2016090712.nc -o outputfile_2016090712.nc'" --gaa remap_hostname=e3d132815114 --gaa remap_version=4.9.1 --hdr_pad=10000 --rgr lat_nm_out=lat#lon_nm_out=lon --map="/tmp/ncremap_tmp_map_nco_nco_con.nc.pid198744" "/content/drive/MyDrive/Projects/20220014_CMM3/RDRS_input_data/CaPA_coarse/2016090712.nc" "outputfile_2016090712.nc"
This is being run in Google Colab with nco installed (hence the /content/drive/MyDrive path; I omitted the exclamation mark from the ncremap command above).
I have tried to unpack the data with the -U flag and looked at the -R argument to no avail.
Incidentally, the cdo command below remaps the file fine, but it changes the variable organization and naming in ways that don't work well for my purposes, so I am trying to solve this with nco.
cdo remapbil,2016090618.nc 2016090712.nc outputfile_2016090712.nc
The good news is that newer versions of NCO do not die as shown above, so you might try upgrading to NCO 5.1.4:
zender@spectral:~/Downloads$ ncremap --version
ncremap, the NCO regridder and grid, map, and weight-generator, version 5.1.5-alpha02 "Champignons"
...
zender@spectral:~/Downloads$ ncremap -d 2016090618.nc -i 2016090712.nc -o outputfile_2016090712.nc
Input #00: /Users/zender/Downloads/2016090712.nc
Grid(src): /var/folders/ct/rzzvxlqn4_3f9cr8wgn2pm480000gn/T/ncremap_tmp_grd_src.nc.pid33012
Grid(dst): /var/folders/ct/rzzvxlqn4_3f9cr8wgn2pm480000gn/T/ncremap_tmp_grd_dst.nc.pid33012
Map/Wgt : /var/folders/ct/rzzvxlqn4_3f9cr8wgn2pm480000gn/T/ncremap_tmp_map_nco_nco_con.nc.pid33012
zender@spectral:~/Downloads$
The bad news is that the input files, and thus the output file, all contain NaN values. NCO does not like NaN for reasons described here. So I cannot tell whether it works as intended. BTW, if you want bilinear rather than conservative regridding, then use ncremap --alg_typ=bilinear ....
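If you want to confirm whether the inputs actually contain NaNs, and swap them for a conventional fill value before regridding, here is a minimal Python sketch using netCDF4 and numpy; the variable name "precip" and the fill value are placeholder assumptions, so adapt them to your files:
import netCDF4
import numpy as np
ds = netCDF4.Dataset("2016090712.nc", "r+")
ds.set_auto_mask(False)                   # work on raw arrays rather than masked arrays
var = ds.variables["precip"]              # hypothetical variable name
data = var[:]
print("NaN count:", np.isnan(data).sum())
var[:] = np.nan_to_num(data, nan=1.0e36)  # replace NaN with a sentinel; match your _FillValue
ds.close()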

Issues converting a small Hex value to a Binary value

I am trying to take the contents of a file that has a Hex number and convert that number to Binary and output to a file.
This is what I am trying but not getting the binary value:
xxd -r -p Hex.txt > Binary.txt
The contents of Hex.txt is: ff
I have also tried FF and 0xFF, but would like to just use ff since the device I am pulling the info from has it in that format.
Instead of 11111111, which it should be, I get a y with 2 dots above it.
If I change it to ee, I get an i with 2 dots. It seems to be reading the value just fine, but according to what I have read on the xxd -r -p command, it is not outputting it in the format I want.
The other ways I have found to convert hex to binary have either also not worked or involve a pretty big Bash script that seems unnecessary for what I thought would be a simple task.
This also gives me the y with 2 dots.
$ for i in $(cat Hex.txt) ; do printf "\x$i" ; done > Binary.txt
For some reason almost every solution I find gives me this format instead of a human-readable binary value with 1s and 0s.
Any help is appreciated. I am planning on using this in a script to pull the relay values from Digital Loggers devices using curl, and to give Home Assistant a readable file to record the relay state. The Digital Loggers curl command gives the state of all 8 relays at once as hex, rather than letting you pull the status of a specific relay.
If "file.txt" contains:
fe
0a
and you run this:
perl -ane 'printf("%08b\n",hex($_))' file.txt
You'll get this:
11111110
00001010
If you use it a lot, you might want to make a bash function of it in your login profile along these lines - being extremely respectful of spaces and semi-colons that might look unnecessary:
bin(){ perl -ane 'printf("%08b\n",hex($_))' $1 ; }
Then you'll be able to do:
bin file.txt
If you dislike Perl for some reason, you can achieve something similar without it as follows:
tr '[:lower:]' '[:upper:]' < file.txt |
while read h ; do
echo "obase=2; ibase=16; $h" | bc
done
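If Python happens to be available, a similar one-liner works as well (a sketch assuming one hex value per line, as above):
python3 -c 'import sys; [print(format(int(l, 16), "08b")) for l in sys.stdin if l.strip()]' < file.txt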

How to Edit a text from the output in DCL -- OpenVMS scripting

I wrote the code below, which extracts the directory name along with the file name; I then use the PURGE command on that extracted text.
$ sear VAXMANAGERS_ROOT:[PROC]TEMP.LIS LOG/out=VAXMANAGERS_ROOT:[DEV]FVLIM.TXT
$ OPEN IN VAXMANAGERS_ROOT:[DEV]FVLIM.TXT
$ LOOP:
$ READ/END_OF_FILE=ENDIT IN ABCD
$ GOTO LOOP
$ ENDIT:
$ close in
$ ERROR=F$EXTRACT(0,59,ABCD)
$ sh sym ERROR
$ purge/keep=1 'ERROR'
The output is as follows:
ERROR = "$1$DKC102:[PROD_LIVE.LOG]DP2017_TMP2.LIS;27392 "
The problem is that the directory length varies every time (it may be 59 or 40 or some other value, but the directory plus filename will not exceed 59 characters on my system). So in the above output the system also fetches the version number of the file, and I am not able to purge the file while the version number is included:
%PURGE-E-PURGEVER, version numbers not permitted
Any suggestions on how to eliminate the version number from the output?
I cannot use the exact length of the directory, as it varies every time. :(
The answer with F$ELEMENT( 0, ";", ABCD ) should work, as confirmed. I might script something like this:
$ ERROR = F$PARSE(";",ERROR) ! will return $1$DKC102:[PROD_LIVE.LOG]DP2017_TMP2.LIS;
$ ERROR = ERROR - ";"
$ PURGE/KEEP=1 'ERROR'
I am not sure why you have the read loop; it leaves you with just the last line of the file, but I assume that's what you want.
While HABO explained it, here are some more details.
Suppose I use f$search to check if a file exists:
a = f$search("sys$manager:net$server.log")
Then I find that it exists:
wr sys$output a
shows
SYS$SYSROOT:[SYSMGR]NET$SERVER.LOG;9
From the help of f$parse I get
help lex f$parse arg
shows, among other things
Specifies a character string containing the name of a field
in a file specification. Specifying the field argument causes
the F$PARSE function to return a specific portion of a file
specification.
Specify one of the following field names (do not abbreviate):
NODE Node name
DEVICE Device name
DIRECTORY Directory name
NAME File name
TYPE File type
VERSION File version number
So I can do
wr sys$output f$parse(a,,,"DEVICE")
which shows
SYS$SYSROOT:
and also
wr sys$output f$parse(a,,,"DIRECTORY")
so I get
[SYSMGR]
and
wr sys$output f$parse(a,,,"NAME")
shows
NET$SERVER
and
wr sys$output f$parse(a,,,"TYPE")
shows
.LOG
the version is
wr sys$output f$parse(a,,,"VERSION")
shown as
;9
The lexical functions can be handy; check them using
help lexical
it shows
F$CONTEXT F$CSID F$CUNITS F$CVSI F$CVTIME F$CVUI F$DELTA_TIME F$DEVICE F$DIRECTORY F$EDIT
F$ELEMENT F$ENVIRONMENT F$EXTRACT F$FAO F$FID_TO_NAME F$FILE_ATTRIBUTES F$GETDVI F$GETENV
F$GETJPI F$GETQUI F$GETSYI F$IDENTIFIER F$INTEGER F$LENGTH F$LICENSE F$LOCATE F$MATCH_WILD
F$MESSAGE F$MODE F$MULTIPATH F$PARSE F$PID F$PRIVILEGE F$PROCESS F$READLINK F$SEARCH
F$SETPRV F$STRING F$SYMLINK_ATTRIBUTES F$TIME F$TRNLNM F$TYPE F$UNIQUE F$USER

hide error messages in dcl script

I have a test script that generates some errors, shown below; I expect these errors. Is there any way I can prevent them from showing on the screen, however? I use
$ write sys$output
to display whether there is an expected error.
I tried to use
$ DEFINE SYS$ERROR ERROR.LOG
but this redirected my entire error output to that log. If this is the correct way to handle it, can I unset it at the end of my script somehow?
[error example]
%DCL-E-OPENIN, error opening TEST$DISK:[AAA]NOTTHERE.TXT; as input
-RMS-E-FNF, file not found
%DCL-E-OPENIN, error opening TEST$DISK:[AAA]NOTTHERE.TXT; as input
-RMS-E-FNF, file not found
%DCL-W-UNDFIL, file has not been opened by DCL - check logical name
DEFINE/USER creates a logical name that disappears when the next image exits.
So if you use it just before a command, to protect only that command, that is fine.
Otherwise I would prefer SET MESSAGE to control the output.
And of course you want to grab $STATUS and verify it after the command, checking for success or for the expected error and reporting any unexpected error.
Better still: if you expect certain error conditions to occur, then why not test for them?
For example:
$ file = F$SEARCH("TEST$DISK:[AAA]NOTTHERE.TXT")
$ IF file.NES."" THEN TYPE 'file'
Cheers,
Hein
To suppress error messages inside a script, try this command:
$ DEFINE/USER SYS$ERROR NL:
NL: is a null device, so you don't see any error messages displayed on your terminal.
good luck
This works interactively and in batch.
$ SET MESSAGE /NOTEXT /NOSEV /NOFAC /NOID
$ <DCL_Command>
$ SET MESSAGE /TEXT /SEV /FAC /ID

Unexpected error while loading data

I am getting an "Unexpected" error. I tried a few times, and I still could not load the data. Is there any other way to load data?
gs://log_data/r_mini_raw_20120510.txt.gz to 567402616005:myv.may10c
Errors:
Unexpected. Please try again.
Job ID: job_4bde60f1c13743ddabd3be2de9d6b511
Start Time: 1:48pm, 12 May 2012
End Time: 1:51pm, 12 May 2012
Destination Table: 567402616005:myvserv.may10c
Source URI: gs://log_data/r_mini_raw_20120510.txt.gz
Delimiter: ^
Max Bad Records: 30000
Schema:
zoneid: STRING
creativeid: STRING
ip: STRING
update:
I am using the file that can be found here:
http://saraswaticlasses.net/bad.csv.zip
bq load -F '^' --max_bad_record=30000 mycompany.abc bad.csv id:STRING,ceid:STRING,ip:STRING,cb:STRING,country:STRING,telco_name:STRING,date_time:STRING,secondary:STRING,mn:STRING,sf:STRING,uuid:STRING,ua:STRING,brand:STRING,model:STRING,os:STRING,osversion:STRING,sh:STRING,sw:STRING,proxy:STRING,ah:STRING,callback:STRING
I am getting an error "BigQuery error in load operation: Unexpected. Please try again."
The same file works from Ubuntu while it does not work from CentOS 5.4 (Final)
Does the OS encoding need to be checked?
The file you uploaded has an unterminated quote. Can you delete that line and try again? I've filed an internal bigquery bug to be able to handle this case more gracefully.
$grep '"' bad.csv
3000^0^1.202.218.8^2f1f1491^CN^others^2012-05-02 20:35:00^^^^^"Mozilla/5.0^generic web browser^^^^^^^^
When I run a load from my workstation (Ubuntu), I get a warning about the line in question. Note that if you were using a larger file, you would not see this warning, instead you'd just get a failure.
$bq show --format=prettyjson -j job_e1d8636e225a4d5f81becf84019e7484
...
"status": {
"errors": [
{
"location": "Line:29057 / Field:12",
"message": "Missing close double quote (\") character: field starts with: <Mozilla/>",
"reason": "invalid"
}
]
My suspicion is that you have rows or fields in your input data that exceed the 64 KB limit. Perhaps re-check the formatting of your data, check that it is gzipped properly, and if all else fails, try importing uncompressed data. (One possibility is that the entire compressed file is being interpreted as a single row/field that exceeds the aforementioned limit.)
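To check for both problems locally before loading (the unterminated quote and overlong rows), here is a minimal Python sketch; bad.csv and the 64 KB threshold come from the discussion above:
with open("bad.csv", "rb") as f:
    for lineno, raw in enumerate(f, 1):
        # flag rows with an odd number of double quotes, or rows over 64 KB
        if raw.count(b'"') % 2 or len(raw) > 64 * 1024:
            print(lineno, len(raw), raw[:60])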
To answer your original question, there are a few other ways to import data: you could upload directly from your local machine using the command-line tool or the web UI, or you could use the raw API. However, all of these mechanisms (including the Google Storage import that you used) funnel through the same CSV parser, so it's possible that they'll all fail in the same way.