ffmpeg HTTP error 400 Bad Request

But it showed an error like this:

    [https] HTTP error 400 Bad Request
    [hls @ fc] Unable to open key file

A "Request header too large" message is returned with an HTTP 400 error code; this error occurs when the size of the request header has grown too large for the server to accept. The log contained:

    INFO: ffmpeg[2ADC]: [hls,applehttp]
    DEBUG: ffmpeg[2ADC]: [https] HTTP error 400 Bad Request
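
If the 400 really does come from oversized request headers on the playlist/key requests, one hedged workaround is to invoke ffmpeg with a short, explicit header set instead of a large accumulated cookie/header string. A minimal sketch (the stream URL and header values are placeholders, not taken from the log above):

    # Re-run the HLS download with a small, explicit header block so the
    # requests (including the AES key request) stay under the server's
    # header-size limit.
    import subprocess

    MASTER_URL = 'https://example.com/stream/master.m3u8'   # placeholder

    subprocess.check_call([
        'ffmpeg',
        # -headers is ffmpeg's HTTP protocol option; each header ends with \r\n
        '-headers', 'Referer: https://example.com/\r\nUser-Agent: Mozilla/5.0\r\n',
        '-i', MASTER_URL,
        '-c', 'copy',        # remux only, no re-encoding
        'output.mp4',
    ])
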

How do I download only new videos from a playlist?

Use the download-archive feature. With this feature you should initially download the complete playlist with --download-archive /path/to/download/archive/archive.txt, which will record the identifiers of all the videos in a special file. Each subsequent run with the same --download-archive will download only new videos and skip all videos that have been downloaded before. Note that only successful downloads are recorded in the file.

For example, at first,

    youtube-dl --download-archive archive.txt "https://www.youtube.com/playlist?list=PLwiyx1dc3P2JR9N8gQaQN_BCvlSlap7re"

will download the complete PLwiyx1dc3P2JR9N8gQaQN_BCvlSlap7re playlist and create a file archive.txt. Each subsequent run will only download new videos, if any:

    youtube-dl --download-archive archive.txt "https://www.youtube.com/playlist?list=PLwiyx1dc3P2JR9N8gQaQN_BCvlSlap7re"
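
The same behaviour is available when embedding youtube-dl from Python. A minimal sketch (the playlist URL is the one from the example above; the option names mirror the command-line flags):

    import youtube_dl

    ydl_opts = {
        'download_archive': 'archive.txt',  # records IDs of completed downloads
        'ignoreerrors': True,               # keep going if a single entry fails
    }
    with youtube_dl.YoutubeDL(ydl_opts) as ydl:
        ydl.download(['https://www.youtube.com/playlist?list=PLwiyx1dc3P2JR9N8gQaQN_BCvlSlap7re'])
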
Should I add --hls-prefer-native into my config?

When youtube-dl detects an HLS video, it can download it either with the built-in downloader or with ffmpeg. Since many HLS streams are slightly invalid and ffmpeg/youtube-dl each handle some invalid cases better than the other, there is an option to switch the downloader if needed. When youtube-dl knows that one particular downloader works better for a given website, that downloader will be picked. Otherwise, youtube-dl will pick the best downloader for general compatibility, which at the moment happens to be ffmpeg. This choice may change in future versions of youtube-dl, with improvements of the built-in downloader and/or ffmpeg. In particular, the generic extractor (used when your website is not in the list of sites supported by youtube-dl) cannot mandate one specific downloader.

If you put either --hls-prefer-native or --hls-prefer-ffmpeg into your configuration, a different subset of videos will fail to download correctly. Instead, it is much better to file an issue or a pull request which details why the native or the ffmpeg HLS downloader is a better choice for your use case.

Can you add support for this anime video site, or site which shows current movies for free?

As a matter of policy (as well as legality), youtube-dl does not include support for services that specialize in infringing copyright. As a rule of thumb, if you cannot easily find a video that the service is quite obviously allowed to distribute (i.e. that has been uploaded by the creator, the creator's distributor, or is published under a free license), the service is probably unfit for inclusion in youtube-dl.

A note on the service that they don't host the infringing content, but just link to those who do, is evidence that the service should not be included in youtube-dl. The same goes for any DMCA note when the whole front page of the service is filled with videos they are not allowed to distribute. A "fair use" note is equally unconvincing if the service shows copyright-protected videos in full without authorization.

Support requests for services that do purchase the rights to distribute their content are perfectly fine though. If in doubt, you can simply include a source that mentions the legitimate purchase of content.

How can I speed up work on my issue?

(Also known as: Help, my important issue is not being solved!) The youtube-dl core developer team is quite small. While we do our best to solve as many issues as possible, sometimes that can take quite a while. To speed up your issue, here's what you can do:

First of all, please report the issue at our issue tracker. That allows us to coordinate all efforts by users and developers, and serves as a unified point. Unfortunately, the youtube-dl project has grown too large to use personal email as an effective communication channel.

Please read the bug reporting instructions below. A lot of bugs lack all the necessary information. If you can, offer proxy, VPN, or shell access to the youtube-dl developers. If you are able to, test the issue from multiple computers in multiple countries to exclude local censorship or misconfiguration issues.

If nobody is interested in solving your issue, you are welcome to take matters into your own hands and submit a pull request (or coerce/pay somebody else to do so).

Feel free to bump the issue from time to time by writing a small comment ("Issue is still present in the current youtube-dl version from France, but fixed from Belgium"), but please not more than once a month. Please do not declare your issue as important or urgent.

How can I detect whether a given URL is supported by youtube-dl?

For one, have a look at the list of supported sites (docs/supportedsites.md). Note that it can sometimes happen that the site changes its URL scheme (say, from example.com/video/1234567 to example.com/v/1234567) and youtube-dl reports a URL of a service in that list as unsupported. In that case, simply report a bug.

It is not possible to detect whether a URL is supported or not. That's because youtube-dl contains a generic extractor which matches all URLs. You may be tempted to disable, exclude, or remove the generic extractor, but the generic extractor not only allows users to extract videos from lots of websites that embed a video from another service, but may also be used to extract video from a service that hosts it itself. Therefore, we neither recommend nor support disabling, excluding, or removing the generic extractor.

If you want to find out whether a given URL is supported, simply call youtube-dl with it. If you get no videos back, chances are the URL is either not referring to a video or unsupported. You can find out which by examining the output (if you run youtube-dl on the console) or by catching an UnsupportedError exception if you run it from a Python program.
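
For the programmatic case, here is a minimal sketch (the probe helper and its options are illustrative, not part of youtube-dl) that treats an "Unsupported URL" failure as "not supported":

    import youtube_dl
    from youtube_dl.utils import DownloadError, UnsupportedError

    def probe(url):
        opts = {'quiet': True, 'skip_download': True}
        try:
            with youtube_dl.YoutubeDL(opts) as ydl:
                ydl.extract_info(url, download=False)
            return True
        except UnsupportedError:
            return False
        except DownloadError as e:
            # Depending on version and options, the UnsupportedError may be
            # wrapped in a DownloadError; fall back to checking the message.
            if 'Unsupported URL' in str(e):
                return False
            raise
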

Why do I need to go through that much red tape when filing bugs?

Before we had the issue template, despite our extensive bug reporting instructions, about 80% of the issue reports we got were useless, for instance because people used ancient versions hundreds of releases old, because of simple syntactic errors (not in youtube-dl but in general shell usage), because the problem was already reported multiple times before, because people did not actually read an error message, even if it said "please install ffmpeg", because people did not mention the URL they were trying to download, and many more simple, easy-to-avoid problems, many of which were totally unrelated to youtube-dl.

youtube-dl is an open-source project manned by too few volunteers, so we'd rather spend time fixing bugs where we are certain none of those simple problems apply, and where we can be reasonably confident of being able to reproduce the issue without asking the reporter repeatedly. As such, the output of youtube-dl -v YOUR_URL_HERE is really all that's required to file an issue. The issue template also guides you through some basic steps you can do, such as checking that your version of youtube-dl is current.

DEVELOPER INSTRUCTIONS

Most users do not need to build youtube-dl and can download the builds (https://ytdl-org.github.io/youtube-dl/download.html) or get them from their distribution.

To run youtube-dl as a developer, you don't need to build anything either. Simply execute

    python -m youtube_dl

To run the tests, simply invoke your favorite test runner, or execute a test file directly; any of the following work:

    python -m unittest discover
    python test/test_download.py
    nosetests

See item 6 of the new extractor tutorial below for how to run extractor-specific test cases.

If you want to create a build of youtube-dl yourself, you'll need

· python
· make (only GNU make is supported)
· pandoc
· zip
· nosetests

Adding support for a new site

If you want to add support for a new site, first of all make sure this site is not dedicated to copyright infringement (see "Can you add support for this anime video site, or site which shows current movies for free?" above). youtube-dl does not support such sites, thus pull requests adding support for them will be rejected.

After you have ensured this site is distributing its content legally, you can follow this quick list (assuming your service is called yourextractor):

1. Fork this repository.

2. Check out the source code with:

    git clone git@github.com:YOUR_GITHUB_USERNAME/youtube-dl.git

3. Start a new git branch with:

    cd youtube-dl
    git checkout -b yourextractor

4. Start with this simple template and save it to youtube_dl/extractor/yourextractor.py:

    # coding: utf-8
    from __future__ import unicode_literals

    from .common import InfoExtractor


    class YourExtractorIE(InfoExtractor):
        _VALID_URL = r'https?://(?:www\.)?yourextractor\.com/watch/(?P<id>[0-9]+)'
        _TEST = {
            'url': 'https://yourextractor.com/watch/42',
            'md5': 'TODO: md5 sum of the first 10241 bytes of the video file (use --test)',
            'info_dict': {
                'id': '42',
                'ext': 'mp4',
                'title': 'Video title goes here',
                'thumbnail': r're:^https?://.*\.jpg$',
                # TODO more properties, either as:
                # * A value
                # * MD5 checksum; start the string with md5:
                # * A regular expression; start the string with re:
                # * Any Python type (for example int or float)
            }
        }

        def _real_extract(self, url):
            video_id = self._match_id(url)
            webpage = self._download_webpage(url, video_id)

            # TODO more code goes here, for example ...
            title = self._html_search_regex(r'<h1>(.+?)</h1>', webpage, 'title')

            return {
                'id': video_id,
                'title': title,
                'description': self._og_search_description(webpage),
                'uploader': self._search_regex(
                    r'<div[^>]+id="uploader"[^>]*>([^<]+)<',
                    webpage, 'uploader', fatal=False),
                # TODO more properties (see youtube_dl/extractor/common.py)
            }

5. Add an import in youtube_dl/extractor/extractors.py.

6. Run python test/test_download.py TestDownload.test_YourExtractor. This should fail at first, but you can continually re-run it until you're done. If you decide to add more than one test, rename _TEST to _TESTS and make it into a list of dictionaries (a short sketch of this appears after these steps). The tests will then be named TestDownload.test_YourExtractor, TestDownload.test_YourExtractor_1, TestDownload.test_YourExtractor_2, etc. Note that tests with an only_matching key in the test's dict are not counted in.

7. Have a look at youtube_dl/extractor/common.py for possible helper methods and a detailed description of what your extractor should and may return. Add tests and code for as many as you want.

8. Make sure your code follows the youtube-dl coding conventions and check the code with flake8 (https://flake8.pycqa.org/en/latest/index.html#quickstart):

    $ flake8 youtube_dl/extractor/yourextractor.py

9. Make sure your code works under all Python versions claimed supported by youtube-dl, namely 2.6, 2.7, and 3.2+.

When the tests pass, add the new files, commit them and push the result, like this:

    $ git add youtube_dl/extractor/extractors.py
    $ git add youtube_dl/extractor/yourextractor.py
    $ git commit -m '[yourextractor] Add new extractor'
    $ git push origin yourextractor

Finally, create a pull request. We'll then review and merge it. In any case, thank you very much for your contributions!
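
As referenced in step 6, here is a minimal sketch (hypothetical URLs) of _TEST renamed to _TESTS and turned into a list of dictionaries, with a final entry that uses only_matching so only URL matching is checked for it:

    # inside youtube_dl/extractor/yourextractor.py
    from .common import InfoExtractor


    class YourExtractorIE(InfoExtractor):
        _VALID_URL = r'https?://(?:www\.)?yourextractor\.com/(?:watch|embed)/(?P<id>[0-9]+)'
        _TESTS = [{
            'url': 'https://yourextractor.com/watch/42',
            'md5': 'TODO: md5 sum of the first 10241 bytes of the video file',
            'info_dict': {
                'id': '42',
                'ext': 'mp4',
                'title': 'Video title goes here',
            },
        }, {
            # Only the URL regex is checked for this entry; nothing is downloaded.
            'url': 'https://yourextractor.com/embed/42',
            'only_matching': True,
        }]
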
youtube-dl coding conventions

This section introduces guidelines for writing idiomatic, robust and future-proof extractor code.

Extractors are very fragile by nature since they depend on the layout of the source data provided by 3rd party media hosters, which is out of your control, and this layout tends to change. As an extractor implementer your task is not only to write code that will extract media links and metadata correctly, but also to minimize dependency on the source's layout and even to make the code foresee potential future changes and be ready for that. This is important because it will allow the extractor not to break on minor layout changes, thus keeping old youtube-dl versions working. Even though this breakage issue is easily fixed by releasing a new version of youtube-dl with a fix incorporated, all the previous versions become broken in all repositories and distros' packages, which may not be so prompt in fetching the update from us. Needless to say, some non-rolling-release distros may never receive an update at all.

Mandatory and optional metafields

For extraction to work, youtube-dl relies on the metadata your extractor extracts and provides to youtube-dl, expressed as an information dictionary, or simply infodict (documented in youtube_dl/extractor/common.py). Only the following meta fields in the infodict are considered mandatory for a successful extraction process by youtube-dl:

· id (media identifier)
· title (media title)
· url (media download URL) or formats

In fact only the last option is technically mandatory (i.e. if you can't figure out the download location of the media, the extraction does not make any sense). But by convention youtube-dl also treats id and title as mandatory. Thus the aforementioned metafields are the critical data without which extraction does not make any sense, and if any of them fail to be extracted, the extractor is considered completely broken.

Any field apart from the aforementioned ones is considered optional. That means that extraction should be tolerant of situations when sources for these fields can potentially be unavailable (even if they are always available at the moment) and future-proof, in order not to break the extraction of the general-purpose mandatory fields.
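
For illustration, here is a minimal sketch (placeholder values, not an excerpt from common.py) of a _real_extract return value that satisfies the mandatory fields and adds one optional field:

    def _real_extract(self, url):
        video_id = self._match_id(url)
        return {
            'id': video_id,                    # mandatory
            'title': 'Some video title',       # mandatory
            # either a direct media URL ('url') or a 'formats' list:
            'url': 'https://example.com/media/%s.mp4' % video_id,
            'description': None,               # optional; None means "no data"
        }
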
Example

Say you have some source dictionary meta that you've fetched as JSON with an HTTP request and it has a key summary:

    meta = self._download_json(url, video_id)

Assume at this point meta's layout is:

    {
        "summary": "some fancy summary text",
    }

Assume you want to extract summary and put it into the resulting info dict as description. Since description is an optional meta field you should be ready for this key to be missing from the meta dict, so you should extract it like:

    description = meta.get('summary')  # correct

and not like:

    description = meta['summary']  # incorrect

The latter will break the extraction process with a KeyError if summary disappears from meta at some later time, while with the former approach extraction will just go ahead with description set to None, which is perfectly fine (remember None is equivalent to the absence of data).

Similarly, you should pass fatal=False when extracting optional data from a webpage with _search_regex, _html_search_regex or similar methods, for instance:

    description = self._search_regex(
        r'<span[^>]+id="title"[^>]*>([^<]+)<',
        webpage, 'description', fatal=False)

With fatal set to False, if _search_regex fails to extract description it will emit a warning and continue extraction.

You can also pass default=<some fallback value>, for example:

    description = self._search_regex(
        r'<span[^>]+id="title"[^>]*>([^<]+)<',
        webpage, 'description', default=None)

On failure this code will silently continue the extraction with description set to None. That is useful for metafields that may or may not be present.

Provide fallbacks

When extracting metadata, try to do so from multiple sources. For example if title is present in several places, try extracting from at least some of them. This makes it more future-proof in case some of the sources become unavailable.

Example

Say meta from the previous example has a title and you are about to extract it. Since title is a mandatory meta field you should end up with something like:

    title = meta['title']

If title disappears from meta in the future due to some changes on the hoster's side, the extraction would fail since title is mandatory. That's expected.

Assume that you have some other source you can extract title from, for example the og:title HTML meta tag of a webpage. In this case you can provide a fallback scenario:

    title = meta.get('title') or self._og_search_title(webpage)

This code will try to extract from meta first and, if that fails, it will try extracting og:title from the webpage.

Regular expressions

Don't capture groups you don't use. A capturing group must be an indication that it's used somewhere in the code. Any group that is not used must be non-capturing.

Example

Don't capture the id attribute name here since you can't use it for anything anyway.

Correct:

    r'(?:id|ID)=(?P<id>\d+)'

Incorrect:

    r'(id|ID)=(?P<id>\d+)'
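
Putting these conventions together, here is a minimal sketch (the pattern and field are hypothetical and would sit inside an extractor's _real_extract) where the alternation is non-capturing, the value we need is a named group, and the lookup is non-fatal because the field is optional:

    uploader_id = self._search_regex(
        r'(?:uploader|user)_id=(?P<id>\w+)',   # alternation is non-capturing
        webpage, 'uploader id', group='id', fatal=False)
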
POST-PROCESSING OPTIONS

    --postprocessor-args ARGS     Give these arguments to the postprocessor
    -k, --keep-video              Keep the video file on disk after the
                                  post-processing; the video is erased by default
    --no-post-overwrites          Do not overwrite post-processed files; the
                                  post-processed files are overwritten by default
    --embed-subs                  Embed subtitles in the video (only for mp4,
                                  webm and mkv videos)
    --embed-thumbnail             Embed thumbnail in the audio as cover art
    --add-metadata                Write metadata to the video file
    --metadata-from-title FORMAT  Parse additional metadata like song title /
                                  artist from the video title. The format syntax
                                  is the same as --output. Regular expression
                                  with named capture groups may also be used.
                                  The parsed parameters replace existing values.
                                  Example: --metadata-from-title
                                  "%(artist)s - %(title)s" matches a title like
                                  "Coldplay - Paradise". Example (regex):
                                  --metadata-from-title
                                  "(?P<artist>.+?) - (?P<title>.+)"
    --xattrs                      Write metadata to the video file's xattrs
                                  (using dublin core and xdg standards)
    --fixup POLICY                Automatically correct known faults of the file.
                                  One of never (do nothing), warn (only emit a
                                  warning), detect_or_warn (the default; fix file
                                  if we can, warn otherwise)
    --prefer-avconv               Prefer avconv over ffmpeg for running the
                                  postprocessors
    --prefer-ffmpeg               Prefer ffmpeg over avconv for running the
                                  postprocessors (default)
    --ffmpeg-location PATH        Location of the ffmpeg/avconv binary; either
                                  the path to the binary or its containing
                                  directory.
    --exec CMD                    Execute a command on the file after
                                  downloading, similar to find's -exec syntax.
                                  Example: --exec 'adb push {} /sdcard/Music/ &&
                                  rm {}'
    --convert-subs FORMAT         Convert the subtitles to other format
                                  (currently supported: srt|ass|vtt|lrc)
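
When youtube-dl is embedded from Python, several of these switches correspond to entries in the 'postprocessors' option. Here is a minimal sketch (the video URL is just an example, the chosen post-processors are illustrative, and 'writethumbnail'/'writesubtitles' are assumed to be needed so the embedding and conversion steps have files to work on):

    import youtube_dl

    ydl_opts = {
        'outtmpl': '%(title)s.%(ext)s',
        'writethumbnail': True,      # fetch the thumbnail so it can be embedded
        'writesubtitles': True,      # fetch subtitles so they can be converted
        'postprocessors': [
            {'key': 'FFmpegMetadata'},                             # --add-metadata
            {'key': 'FFmpegSubtitlesConvertor', 'format': 'srt'},  # --convert-subs srt
            {'key': 'EmbedThumbnail'},                             # --embed-thumbnail
        ],
    }
    with youtube_dl.YoutubeDL(ydl_opts) as ydl:
        ydl.download(['https://www.youtube.com/watch?v=BaW_jenozKc'])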
