yt-dlp: How can I have a separate file format for playlists and single videos via config?

I have a urls file where I store all the links I want to download, and I download from it using yt-dlp -a urls.
For playlists I'd like to put the videos into a folder, as shown in the documentation examples, while for single videos I'd like to keep a separate file format.
How can I do this with a single configuration file? If it's not possible with a single configuration file, I'd still want to keep the options in a single common place.
Note: in the documentation I saw TYPES: in --output, but I don't know how to use it or whether it applies to my case.
PS: If it's possible to write a wrapper using yt-dlp as a library, with a hook, that would be a viable option too.
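The wrapper route is feasible: yt-dlp is also a Python library, so you can probe each URL first and pick an output template depending on whether it resolves to a playlist. Below is a minimal sketch, assuming a urls file in the current directory; both templates are placeholders, and the _type check is a simplification (some extractors report playlists differently):

# Sketch: choose an output template per URL by probing it first.
import yt_dlp

SINGLE_TMPL = "%(title)s [%(id)s].%(ext)s"
PLAYLIST_TMPL = "%(playlist_title)s/%(playlist_index)s - %(title)s.%(ext)s"

def download(url):
    # Probe without downloading or resolving playlist entries.
    with yt_dlp.YoutubeDL({"quiet": True}) as probe:
        info = probe.extract_info(url, download=False, process=False)
    tmpl = PLAYLIST_TMPL if info.get("_type") == "playlist" else SINGLE_TMPL
    with yt_dlp.YoutubeDL({"outtmpl": tmpl}) as ydl:
        ydl.download([url])

with open("urls") as f:
    for line in f:
        line = line.strip()
        if line and not line.startswith("#"):
            download(line)

Alternatively, recent yt-dlp releases support a replacement operator in output templates, so a single template such as -o "%(playlist_title&{}/|)s%(title)s.%(ext)s" would only prepend a folder when playlist_title is set; check the README of your version before relying on that syntax.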

Related

dropbox API searchFileNames

The current Dropbox API searchFileNames() call searches for a query/substring, so looking for filenames in a folder with a specific extension such as ".jpg" works, but if I combine the query to look for ".jpg .png" I get nothing back, since, as the documentation states, 'A file matches only if it contains all the substrings.'
Is there another API call that will return the union of the searches rather than their intersection?
Thanks
No, the Dropbox API doesn't enable you to search for multiple file extensions at once like this, but I'll be sure to pass this along as a feature request.
As a workaround, you can split this into multiple API calls.
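For illustration, a rough sketch of that workaround with the Dropbox Python SDK; the token, folder path, and extension list are placeholders, and the exact search call may differ depending on your SDK/API version:

# One search call per extension, then union the matches by path.
import dropbox

dbx = dropbox.Dropbox("YOUR_ACCESS_TOKEN")

def search_extensions(folder, extensions):
    seen = {}
    for ext in extensions:
        result = dbx.files_search(folder, ext)
        for match in result.matches:
            md = match.metadata
            seen[md.path_lower] = md  # de-duplicate across calls
    return list(seen.values())

files = search_extensions("/Photos", [".jpg", ".png"])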

What's the difference between 'pelican ./content' and 'make html'?

I started studying Pelican today because I want to move my blog from WordPress to Pelican.
However, after reading the docs, I still don't know the difference between pelican ./content and make html. They both seem to generate a static website. Besides, pelican ./content always raises a UnicodeDecodeError for me, while make html does not.
What's the difference between them, and why?
In the folder where you ran $ pelican-quickstart, you will find a file named Makefile.
In it you will find lines like these:

html: clean $(OUTPUTDIR)/index.html

$(OUTPUTDIR)/%.html:
	$(PELICAN) $(INPUTDIR) -o $(OUTPUTDIR) -s $(CONFFILE) $(PELICANOPTS)

This file shows you what pelican does when you type make <target>, and you can configure many other things in it.
pelican ./content runs the generation of the website using defaults and trying to guess the location of your content, output and configuration files.
make html calls pelican, but explicitly gives it the input directory, the output directory, the configuration file and, optionally, some extra options.
Basically, make html (along with make regenerate) is a convenience target that makes the job a bit easier for you. In any case, you should run make publish to generate the content that is to be uploaded to your web server, as it loads the publishconf.py file, which defines a few extra options (e.g. the RSS feeds) and lets you change settings for the "proper" website.
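For instance, with the quickstart defaults, make html boils down to roughly the following (the directory and file names come from the variables in your Makefile and may differ):

pelican content -o output -s pelicanconf.py

and make publish is essentially the same call with publishconf.py substituted for pelicanconf.py.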

Runtime xml/json generation of directory content

For an iOS app I'm trying to develop, I need to get a listing of the contents of an online directory. (My app works with local directories, but I'm trying to make it work with an online directory.)
I've been looking into this a lot, but I can't find the best solution. (I learned that you can't read the contents of an online directory and its subdirectories with Objective-C and print them to an array to display them in, for example, a table view.)
I did learn how to create a connection and output the HTML of a certain page (or an XML file). That's why I was wondering: is there a way (a web service?) to generate an XML/JSON/HTML file that lists the contents of a directory (if possible, also its subdirectories)? The generation of this XML/JSON/HTML has to happen at runtime, the moment my app asks for the file, since people will be able to add files (PDFs, videos, ...) to the directory via FTP. (Editing the XML/JSON file by hand every time is not an option.)
Any help would be very much appreciated.
You could create a simple webservice in many ways. For example, with PHP you could write something like this:
<?php
// Directory to list; adjust to the path on your server.
$dir = '/myDir';
// scandir() also returns '.' and '..', so filter them out.
$files = array_values(array_diff(scandir($dir), array('.', '..')));
header('Content-Type: application/json');
echo json_encode($files);
?>
Then just point your app to get the contents of that page and parse the JSON.
scandir: http://php.net/manual/en/function.scandir.php
json_encode: http://php.net/manual/en/function.json-encode.php

Mock of Ext JS 4 documentation

I have been researching various documentation options for our products. I thought it would be cool to have the Ext JS 4 docs' look and feel rather than TWiki.
But I am having a tough time understanding the current docs pages in Ext JS 4.0.7. Each and every directory has a README.js and a README.md. If I am not wrong, I have to write my custom documentation in README.md, but I fail to understand how that gets converted into README.js.
Could someone let me know how a .md file can be converted into a .js file?
They are using a self-made tool called JSDuck.
I guess you need to run it over your .md files and it will generate the .js files for you.
JsDuck is the way to go. I'm using it to build some custom documentation.
To get the guides:
Just create a JSON file. I usually call it guides.json and write out the structure; it is well defined in the link below, and a sample is sketched after these steps.
Create a folder in the same location as the guides file, and in it:
create a folder for each entry in the JSON (each entry has a name field, and this should be the name of the folder);
create the file readme.md;
add an icon to it, which should be named icon-lg.png.
When using jsduck from the command line, add the following to the arguments:
--guides=[the path to your guides.json file]
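For reference, a guides.json might look roughly like this; the group title, entry name, and descriptions are placeholders, and each name must match a folder containing readme.md and icon-lg.png:

[
  {
    "title": "My Guides",
    "items": [
      {
        "name": "getting_started",
        "title": "Getting Started",
        "description": "How to set everything up"
      }
    ]
  }
]

A run might then look like jsduck src --output docs --guides=guides.json, though the exact arguments depend on your JSDuck version.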
More information can be found here:
https://github.com/senchalabs/jsduck/wiki/Advanced-Usage (go to the guides section)
Also, to get more information on the command-line parameters, use the command
jsduck-3.2.1 --help=full
Hope this works for you.

How do many websites hide their file structure?

When I look at many large sites (e.g. Wikipedia or this site), the URLs look like this:
http://en.wikipedia.org/wiki/StackOverflow
And not like:
http://en.wikipedia.org/wiki.php?article=StackOverflow
http://en.wikipedia.org/wiki.pl?article=StackOverflow
... or even
http://en.wikipedia.org/wiki?article=StackOverflow
I suppose that Wikipedia does not create a separate file for every article (and then use Apache modules like mod_rewrite to hide the file extensions).
But how do they do this? Are they using a special server? Is there a way to configure Apache to act like this? For example, one script is called for every request, and the path of the request is passed to the script, which then decides what to output.
These are called friendly or clean URLs.
Have a look at
http://en.wikipedia.org/wiki/Rewrite_engine
http://en.wikipedia.org/wiki/Clean_URL
http://www.petefreitag.com/item/503.cfm
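To illustrate the "one script handles every request" model from the question, here is a minimal .htaccess sketch for Apache with mod_rewrite; the front-controller name index.php and the query parameter are just examples:

RewriteEngine On
# Leave requests for real files and directories alone
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
# Send everything else to a single front-controller script
RewriteRule ^(.*)$ index.php?path=$1 [L,QSA]

The script then inspects the requested path (e.g. $_GET['path'] or REQUEST_URI) and decides what to render, so no per-article file ever has to exist on disk.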