
[cleanup] Misc

pukkandan 2022-08-15 03:15:05 +05:30
parent 49b4ceaedf
commit 1e4fca9a87
GPG key ID: 7EEE9E1E817D0A39
10 changed files with 23 additions and 35 deletions

View file

@@ -20,10 +20,10 @@ ### 2022.08.08
 * `--compat-option no-live-chat` should disable danmaku
 * Fix misleading DRM message
 * Import ctypes only when necessary
-* Minor bugfixes by [pukkandan](https://github.com/pukkandan)
-* Reject entire playlists faster with `--match-filter` by [pukkandan](https://github.com/pukkandan)
+* Minor bugfixes
+* Reject entire playlists faster with `--match-filter`
 * Remove filtered entries from `-J`
-* Standardize retry mechanism by [pukkandan](https://github.com/pukkandan)
+* Standardize retry mechanism
 * Validate `--merge-output-format`
 * [downloader] Add average speed to final progress line
 * [extractor] Add field `audio_channels`
@@ -31,7 +31,7 @@ ### 2022.08.08
 * [ffmpeg] Set `ffmpeg_location` in a contextvar
 * [FFmpegThumbnailsConvertor] Fix conversion from GIF
 * [MetadataParser] Don't set `None` when the field didn't match
-* [outtmpl] Smarter replacing of unsupported characters by [pukkandan](https://github.com/pukkandan)
+* [outtmpl] Smarter replacing of unsupported characters
 * [outtmpl] Treat empty values as None in filenames
 * [utils] sanitize_open: Allow any IO stream as stdout
 * [build, devscripts] Add devscript to set a build variant
@@ -64,7 +64,7 @@ ### 2022.08.08
 * [extractor/bbc] Fix news articles by [ajj8](https://github.com/ajj8)
 * [extractor/camtasia] Separate into own extractor by [coletdjnz](https://github.com/coletdjnz)
 * [extractor/cloudflarestream] Fix video_id padding by [haobinliang](https://github.com/haobinliang)
-* [extractor/crunchyroll] Fix conversion of thumbnail from GIF by [pukkandan](https://github.com/pukkandan)
+* [extractor/crunchyroll] Fix conversion of thumbnail from GIF
 * [extractor/crunchyroll] Handle missing metadata correctly by [Burve](https://github.com/Burve), [pukkandan](https://github.com/pukkandan)
 * [extractor/crunchyroll:beta] Extract timestamp and fix tests by [tejing1](https://github.com/tejing1)
 * [extractor/crunchyroll:beta] Use streams API by [tejing1](https://github.com/tejing1)

View file

@@ -28,12 +28,12 @@ ## [coletdjnz](https://github.com/coletdjnz)
 [![gh-sponsor](https://img.shields.io/badge/_-Sponsor-red.svg?logo=githubsponsors&labelColor=555555&style=for-the-badge)](https://github.com/sponsors/coletdjnz)
 * YouTube improvements including: age-gate bypass, private playlists, multiple-clients (to avoid throttling) and a lot of under-the-hood improvements
-* Added support for downloading YoutubeWebArchive videos
-* Added support for new websites MainStreaming, PRX, nzherald, etc
+* Added support for new websites YoutubeWebArchive, MainStreaming, PRX, nzherald, Mediaklikk, StarTV etc
+* Improved/fixed support for Patreon, panopto, gfycat, itv, pbs, SouthParkDE etc
-## [Ashish0804](https://github.com/Ashish0804)
+## [Ashish0804](https://github.com/Ashish0804) <sub><sup>[Inactive]</sup></sub>
 [![ko-fi](https://img.shields.io/badge/_-Ko--fi-red.svg?logo=kofi&labelColor=555555&style=for-the-badge)](https://ko-fi.com/ashish0804)
@@ -48,4 +48,5 @@ ## [Lesmiscore](https://github.com/Lesmiscore) (nao20010128nao)
 **Monacoin**: mona1q3tf7dzvshrhfe3md379xtvt2n22duhglv5dskr
 * Download live from start to end for YouTube
-* Added support for new websites mildom, PixivSketch, skeb, radiko, voicy, mirrativ, openrec, whowatch, damtomo, 17.live, mixch etc
+* Added support for new websites AbemaTV, mildom, PixivSketch, skeb, radiko, voicy, mirrativ, openrec, whowatch, damtomo, 17.live, mixch etc
+* Improved/fixed support for fc2, YahooJapanNews, tver, iwara etc

View file

@@ -146,7 +146,7 @@ ### Differences in default behavior
 * Some private fields such as filenames are removed by default from the infojson. Use `--no-clean-infojson` or `--compat-options no-clean-infojson` to revert this
 * When `--embed-subs` and `--write-subs` are used together, the subtitles are written to disk and also embedded in the media file. You can use just `--embed-subs` to embed the subs and automatically delete the separate file. See [#630 (comment)](https://github.com/yt-dlp/yt-dlp/issues/630#issuecomment-893659460) for more info. `--compat-options no-keep-subs` can be used to revert this
 * `certifi` will be used for SSL root certificates, if installed. If you want to use system certificates (e.g. self-signed), use `--compat-options no-certifi`
-* youtube-dl tries to remove some superfluous punctuations from filenames. While this can sometimes be helpful, it is often undesirable. So yt-dlp tries to keep the fields in the filenames as close to their original values as possible. You can use `--compat-options filename-sanitization` to revert to youtube-dl's behavior
+* yt-dlp's sanitization of invalid characters in filenames is different/smarter than in youtube-dl. You can use `--compat-options filename-sanitization` to revert to youtube-dl's behavior
 For ease of use, a few more compat options are available:
@@ -1758,9 +1758,7 @@ #### youtube
 * `comment_sort`: `top` or `new` (default) - choose comment sorting mode (on YouTube's side)
 * `max_comments`: Limit the amount of comments to gather. Comma-separated list of integers representing `max-comments,max-parents,max-replies,max-replies-per-thread`. Default is `all,all,all,all`
 * E.g. `all,all,1000,10` will get a maximum of 1000 replies total, with up to 10 replies per thread. `1000,all,100` will get a maximum of 1000 comments, with a maximum of 100 replies total
-* `innertube_host`: Innertube API host to use for all API requests
-* E.g. `studio.youtube.com`, `youtubei.googleapis.com`
-* Note: Cookies exported from `www.youtube.com` will not work with hosts other than `*.youtube.com`
+* `innertube_host`: Innertube API host to use for all API requests; e.g. `studio.youtube.com`, `youtubei.googleapis.com`. Note that cookies exported from one subdomain will not work on others
 * `innertube_key`: Innertube API key to use for all API requests
 #### youtubetab (YouTube playlists, channels, feeds, etc.)
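The `youtube` extractor arguments documented in the hunk above can also be supplied through the Python API's `extractor_args` option. A minimal sketch, assuming yt-dlp is installed; the video URL and the decision to fetch comments are illustrative assumptions, not part of this commit:

```python
# Sketch: passing the youtube extractor arguments shown above via the Python API,
# roughly equivalent to
#   --extractor-args "youtube:comment_sort=top;max_comments=all,all,1000,10"
import yt_dlp

ydl_opts = {
    'getcomments': True,  # actually request comment extraction
    'extractor_args': {
        'youtube': {
            'comment_sort': ['top'],
            'max_comments': ['all', 'all', '1000', '10'],
        },
    },
}
with yt_dlp.YoutubeDL(ydl_opts) as ydl:
    # URL is only an example
    info = ydl.extract_info('https://www.youtube.com/watch?v=BaW_jenozKc', download=False)
    print(len(info.get('comments') or []))
```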

View file

@@ -301,7 +301,7 @@ class YoutubeDL:
 should act on each input URL as opposed to for the entire queue
 cookiefile: File name or text stream from where cookies should be read and dumped to
 cookiesfrombrowser: A tuple containing the name of the browser, the profile
-name/pathfrom where cookies are loaded, and the name of the
+name/path from where cookies are loaded, and the name of the
 keyring, e.g. ('chrome', ) or ('vivaldi', 'default', 'BASICTEXT')
 legacyserverconnect: Explicitly allow HTTPS connection to servers that do not
 support RFC 5746 secure renegotiation
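The `cookiesfrombrowser` tuple documented above maps directly onto the Python API. A minimal sketch using the docstring's own example values; this is assumed usage, not code from the commit:

```python
# Sketch: the cookiesfrombrowser option as described in the docstring above —
# (browser name, profile name/path, keyring); trailing elements can be omitted.
import yt_dlp

ydl_opts = {
    # e.g. ('chrome',) for the default Chrome profile, or:
    'cookiesfrombrowser': ('vivaldi', 'default', 'BASICTEXT'),
    # 'cookiefile': 'cookies.txt',  # alternative: read/dump a cookie file instead
}
with yt_dlp.YoutubeDL(ydl_opts) as ydl:
    ydl.download(['https://example.com/some/video'])  # URL is a placeholder
```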

View file

@@ -1,17 +1,14 @@
 from .common import InfoExtractor
-from ..utils import (
-    clean_html,
-    float_or_none,
-    traverse_obj,
-    try_call,
-)
-# more info about jixie:
-# [1] https://jixie.atlassian.net/servicedesk/customer/portal/2/article/1339654214?src=-1456335525,
-# [2] https://scripts.jixie.media/jxvideo.3.1.min.js
+from ..utils import clean_html, float_or_none, traverse_obj, try_call
 class JixieBaseIE(InfoExtractor):
+    """
+    API Reference:
+        https://jixie.atlassian.net/servicedesk/customer/portal/2/article/1339654214?src=-1456335525,
+        https://scripts.jixie.media/jxvideo.3.1.min.js
+    """
     def _extract_data_from_jixie_id(self, display_id, video_id, webpage):
         json_data = self._download_json(
             'https://apidam.jixie.io/api/public/stream', display_id,

View file

@@ -1,7 +1,5 @@
 from .jixie import JixieBaseIE
-# Video from video.kompas.com seems use jixie player
 class KompasVideoIE(JixieBaseIE):
     _VALID_URL = r'https?://video\.kompas\.com/\w+/(?P<id>\d+)/(?P<slug>[\w-]+)'
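Extractors built on `JixieBaseIE`, like `KompasVideoIE` above, typically pair a `_VALID_URL` with a `_real_extract` that delegates to the `_extract_data_from_jixie_id` method shown in the jixie.py hunk. A rough sketch of that pattern for a hypothetical site (illustrative only, not code from this commit):

```python
# Rough sketch of a JixieBaseIE subclass, modeled on KompasVideoIE above.
# The site, URL pattern and delegation are assumptions based on the
# _extract_data_from_jixie_id(display_id, video_id, webpage) signature shown earlier.
from .jixie import JixieBaseIE


class SomeJixieSiteIE(JixieBaseIE):  # hypothetical extractor
    _VALID_URL = r'https?://video\.example\.com/\w+/(?P<id>\d+)/(?P<slug>[\w-]+)'

    def _real_extract(self, url):
        video_id, display_id = self._match_valid_url(url).group('id', 'slug')
        webpage = self._download_webpage(url, display_id)
        # The base class resolves formats/subtitles from the Jixie stream API
        return self._extract_data_from_jixie_id(display_id, video_id, webpage)
```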

View file

@@ -325,7 +325,7 @@ def _real_extract(self, url):
        airings = self._download_json(
            f'https://search-api-mlbtv.mlb.com/svc/search/v2/graphql/persisted/query/core/Airings?variables=%7B%22partnerProgramIds%22%3A%5B%22{video_id}%22%5D%2C%22applyEsniMediaRightsLabels%22%3Atrue%7D',
            video_id)['data']['Airings']
        formats, subtitles = [], {}
        for airing in airings:
            m3u8_url = self._download_json(
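The Airings request above is a GraphQL persisted query whose `variables` parameter is simply URL-encoded JSON. A small sketch of the equivalent encoding, with a placeholder `video_id` (an illustration of the URL construction, not the extractor's code):

```python
# Sketch: building the percent-encoded "variables" JSON used by the
# MLB Airings persisted query shown above.
import json
from urllib.parse import quote

video_id = '1234567'  # placeholder MLB program id
variables = json.dumps({
    'partnerProgramIds': [video_id],
    'applyEsniMediaRightsLabels': True,
}, separators=(',', ':'))  # compact JSON, matching the URL in the diff

url = ('https://search-api-mlbtv.mlb.com/svc/search/v2/graphql/persisted/query/core/Airings'
       f'?variables={quote(variables)}')
print(url)
```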

View file

@@ -1,8 +1,5 @@
import json
from .common import InfoExtractor
from .youtube import YoutubeIE
from ..utils import (
    clean_html,
    format_field,

View file

@@ -1169,7 +1169,7 @@ def _real_extract(self, url):
             'id': clip.get('id') or video_id,
             '_old_archive_ids': [make_archive_id(self, old_id)] if old_id else None,
             'display_id': video_id,
-            'title': clip.get('title') or video_id,
+            'title': clip.get('title'),
             'formats': formats,
             'duration': int_or_none(clip.get('durationSeconds')),
             'view_count': int_or_none(clip.get('viewCount')),

View file

@@ -2,10 +2,7 @@
 from uuid import uuid4
 from .common import InfoExtractor
-from ..compat import (
-    compat_HTTPError,
-    compat_str,
-)
+from ..compat import compat_HTTPError, compat_str
 from ..utils import (
     ExtractorError,
     int_or_none,