[Edit: I was able to figure out a workaround for this, see below.]

I'm trying to stream several remote MP4 clips from S3, playing them in sequence as one continuous video (so that scrubbing works both within and across clips) without any stuttering, and without explicitly downloading them to the device first. However, I'm finding that the clips buffer very slowly (even on a fast network connection) and I haven't been able to find an adequate way to fix this.

I've been trying to do this with AVPlayer, since AVPlayer with an AVMutableComposition plays the supplied video tracks as one continuous track (unlike AVQueuePlayer, which I gather plays each video separately and therefore doesn't support continuous scrubbing across the clips).

When I stick one of the assets directly into an AVPlayerItem and play it (with no AVMutableComposition), it buffers quickly. But with the AVMutableComposition, the video starts stuttering very badly on the second clip (my test case has 6 clips of about 6 seconds each), while the audio keeps playing fine. After it has played through once, rewinding to the beginning gives perfectly smooth playback, so I assume the problem lies in the buffering.
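
For comparison, the single-clip case that buffers quickly is just the straightforward setup, roughly like this (a sketch with a placeholder URL):

// Sketch of the "vanilla" case: one remote clip played directly,
// with no AVMutableComposition involved. The URL is a placeholder.
NSURL *clipURL = [NSURL URLWithString:@"https://example.com/clip1.mp4"];
AVURLAsset *singleAsset = [AVURLAsset URLAssetWithURL:clipURL options:nil];
AVPlayerItem *singleItem = [AVPlayerItem playerItemWithAsset:singleAsset];
AVPlayer *singlePlayer = [AVPlayer playerWithPlayerItem:singleItem];
[singlePlayer play];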

My attempts to solve this so far have felt convoluted, given that this seems like a fairly basic use case for AVPlayer - I'd welcome any ideas.

Here's the main code that sets up the AVMutableComposition:

// Build an AVAsset for each of the source URIs
- (void)prepareAssetsForSources:(NSArray *)sources
{
  NSMutableArray *assets = [[NSMutableArray alloc] init]; // the assets to be used in the AVMutableComposition
  NSMutableArray *offsets = [[NSMutableArray alloc] init]; // for tracking buffering progress
  CMTime currentOffset = kCMTimeZero;
  for (NSDictionary* source in sources) {
    bool isNetwork = [RCTConvert BOOL:[source objectForKey:@"isNetwork"]];
    bool isAsset = [RCTConvert BOOL:[source objectForKey:@"isAsset"]];
    NSString *uri = [source objectForKey:@"uri"];
    NSString *type = [source objectForKey:@"type"];

    NSURL *url = isNetwork ?
      [NSURL URLWithString:uri] :
      [[NSURL alloc] initFileURLWithPath:[[NSBundle mainBundle] pathForResource:uri ofType:type]];

    AVURLAsset *asset = [AVURLAsset URLAssetWithURL:url options:nil];
    currentOffset = CMTimeAdd(currentOffset, asset.duration);

    [assets addObject:asset];
    [offsets addObject:[NSNumber numberWithFloat:CMTimeGetSeconds(currentOffset)]];
  }
  _clipAssets = assets;
  _clipEndOffsets = offsets;
}

// Called with _clipAssets
- (AVPlayerItem*)playerItemForAssets:(NSMutableArray *)assets
{
  AVMutableComposition* composition = [AVMutableComposition composition];
  for (AVAsset* asset in assets) {
    CMTimeRange editRange = CMTimeRangeMake(CMTimeMake(0, 600), asset.duration);
    NSError *editError;

    [composition insertTimeRange:editRange
                         ofAsset:asset
                          atTime:composition.duration
                           error:&editError];
  }
  AVPlayerItem* playerItem = [AVPlayerItem playerItemWithAsset:composition];
  return playerItem; // this is used to initialize the main player
}
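
(For context, the returned item is what initializes the main player, roughly as below; the _player ivar and the layer setup are assumptions about the surrounding view code.)

// Sketch: initializing the main player and its layer from the composition-backed item.
AVPlayerItem *mainItem = [self playerItemForAssets:_clipAssets];
_player = [AVPlayer playerWithPlayerItem:mainItem];
AVPlayerLayer *playerLayer = [AVPlayerLayer playerLayerWithPlayer:_player];
playerLayer.frame = self.bounds;
[self.layer addSublayer:playerLayer];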

My first idea was: since it buffers quickly with a vanilla AVPlayerItem, why not keep a separate buffering player that loads each asset in turn (without an AVMutableComposition) to buffer the assets for the main player?

- (void)startBufferingClips
{
  _bufferingPlayerItem = [AVPlayerItem playerItemWithAsset:_clipAssets[0] 
                              automaticallyLoadedAssetKeys:@[@"tracks"]];
  _bufferingPlayer = [AVPlayer playerWithPlayerItem:_bufferingPlayerItem];
  _currentlyBufferingIndex = 0;
}

// Called every 250 ms via an addPeriodicTimeObserverForInterval on the main player
- (void)updateBufferingProgress
{
  // If the playable (loaded) range is within 100 milliseconds of the clip
  // currently being buffered, load the next clip into the buffering player.
  float playableDuration = [[self calculateBufferedDuration] floatValue];
  CMTime totalDurationTime = [self playerItemDuration:_bufferingPlayer];
  Float64 totalDurationSeconds = CMTimeGetSeconds(totalDurationTime);
  bool bufferingComplete = totalDurationSeconds - playableDuration < 0.1;
  float bufferedSeconds = [self bufferedSeconds:playableDuration];

  float playerTimeSeconds = CMTimeGetSeconds([_player currentTime]);
  __block NSUInteger playingClipIndex = 0;

  // find the index of _player's currently playing clip
  [_clipEndOffsets enumerateObjectsUsingBlock:^(id offset, NSUInteger idx, BOOL *stop) {
    if (playerTimeSeconds < [offset floatValue]) {
      playingClipIndex = idx;
      *stop = YES;
    }
  }];

  // TODO: if bufferedSeconds - playerTimeSeconds <= 0, pause the main player

  if (bufferingComplete && _currentlyBufferingIndex < [_clipAssets count] - 1) {
    // We're done buffering this clip, load the buffering player with the next asset
    _currentlyBufferingIndex += 1;
    _bufferingPlayerItem = [AVPlayerItem playerItemWithAsset:_clipAssets[_currentlyBufferingIndex]
                                automaticallyLoadedAssetKeys:@[@"tracks"]];
    _bufferingPlayer = [AVPlayer playerWithPlayerItem:_bufferingPlayerItem];
  }
}
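
For completeness, the periodic observer mentioned in the comment above is wired up roughly like this (a sketch; the _progressObserver ivar name is an assumption):

// Sketch: call -updateBufferingProgress every 250 ms, driven by the main player.
// _progressObserver is a hypothetical ivar kept so the observer can be removed later.
- (void)addBufferingProgressObserver
{
  __weak typeof(self) weakSelf = self;
  _progressObserver =
    [_player addPeriodicTimeObserverForInterval:CMTimeMakeWithSeconds(0.25, NSEC_PER_SEC)
                                          queue:dispatch_get_main_queue()
                                     usingBlock:^(CMTime time) {
                                       [weakSelf updateBufferingProgress];
                                     }];
}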

- (float)bufferedSeconds:(float)playableDuration {
  __block float seconds = 0.0; // total duration of clips already buffered
  if (_currentlyBufferingIndex > 0) {
    [_clipEndOffsets enumerateObjectsUsingBlock:^(id offset, NSUInteger idx, BOOL *stop) {
      if (idx + 1 >= _currentlyBufferingIndex) {
        seconds = [offset floatValue];
        *stop = YES;
      }
    }];
  }
  return seconds + playableDuration;
}

- (NSNumber *)calculateBufferedDuration {
  AVPlayerItem *video = _bufferingPlayer.currentItem;
  if (video.status == AVPlayerItemStatusReadyToPlay) {
    __block float longestPlayableRangeSeconds = 0.0; // must be initialized before the checks below
    [video.loadedTimeRanges enumerateObjectsUsingBlock:^(id obj, NSUInteger idx, BOOL *stop) {
      CMTimeRange timeRange = [obj CMTimeRangeValue];
      float seconds = CMTimeGetSeconds(CMTimeRangeGetEnd(timeRange));
      if (seconds > 0.1) {
        if (!longestPlayableRangeSeconds) {
          longestPlayableRangeSeconds = seconds;
        } else if (seconds > longestPlayableRangeSeconds) {
          longestPlayableRangeSeconds = seconds;
        }
      }
    }];
    Float64 playableDuration = longestPlayableRangeSeconds;
    if (playableDuration && playableDuration > 0) {
      return [NSNumber numberWithFloat:longestPlayableRangeSeconds];
    }
  }
  return [NSNumber numberWithInteger:0];
}
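
(The playerItemDuration: helper called in updateBufferingProgress isn't shown; presumably it just returns the duration of the player's current item once it's ready to play, along these lines:)

// Assumed shape of the playerItemDuration: helper referenced above (not the original code).
- (CMTime)playerItemDuration:(AVPlayer *)player
{
  AVPlayerItem *item = player.currentItem;
  if (item && item.status == AVPlayerItemStatusReadyToPlay) {
    return item.duration;
  }
  return kCMTimeInvalid; // treat the duration as unknown until the item is ready
}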

At first this seemed to work like a charm, but then I switched to another set of test clips and the buffering was extremely slow again (the buffering player helped, but not enough). It looks like the loadedTimeRanges of the assets loaded into the buffering player didn't match the loadedTimeRanges of the same assets inside the AVMutableComposition: even after the loadedTimeRanges of every item loaded into the buffering player indicated that the whole asset had been buffered, the main player's video kept stuttering (while the audio played seamlessly to the end). Again, playback was seamless on rewind once the main player had made it through all the clips once.
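
(For reference, the comparison described above can be made by logging the loadedTimeRanges of both items side by side, e.g. from the same periodic callback; a sketch:)

// Sketch: log how far the buffering player's item and the main player's
// composition-backed item have each buffered, for comparison.
- (void)logLoadedTimeRanges
{
  for (NSValue *value in _bufferingPlayer.currentItem.loadedTimeRanges) {
    CMTimeRange range = [value CMTimeRangeValue];
    NSLog(@"buffering item: %.2f-%.2f",
          CMTimeGetSeconds(range.start), CMTimeGetSeconds(CMTimeRangeGetEnd(range)));
  }
  for (NSValue *value in _player.currentItem.loadedTimeRanges) {
    CMTimeRange range = [value CMTimeRangeValue];
    NSLog(@"composition item: %.2f-%.2f",
          CMTimeGetSeconds(range.start), CMTimeGetSeconds(CMTimeRangeGetEnd(range)));
  }
}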

I hope the answer to this, whatever it turns out to be, can serve as a starting point for other iOS developers trying to implement this basic use case. Thanks!

Edit: Since I posted this question, I came up with the following workaround. Hopefully it will save whoever runs into this some headache.

What I ended up doing was keeping two buffering players (two AVPlayers) that start out buffering the first two clips, and then each moves on to the lowest-indexed clip that hasn't been buffered yet once its loadedTimeRanges indicate that buffering of its current clip is complete. I pause/unpause playback based on which clips have been buffered and the loadedTimeRanges of the buffering players, plus a small margin. This takes a few bookkeeping variables but isn't too complicated.

This is how the buffering players are initialized (I'm omitting the bookkeeping logic here):

- (void)startBufferingClips
{
  _bufferingPlayerItemA = [AVPlayerItem playerItemWithAsset:_clipAssets[0] 
                               automaticallyLoadedAssetKeys:@[@"tracks"]];
  _bufferingPlayerA = [AVPlayer playerWithPlayerItem:_bufferingPlayerItemA];
  _currentlyBufferingIndexA = [NSNumber numberWithInt:0];
  if ([_clipAssets count] > 1) {
    _bufferingPlayerItemB = [AVPlayerItem playerItemWithAsset:_clipAssets[1] 
                                 automaticallyLoadedAssetKeys:@[@"tracks"]];
    _bufferingPlayerB = [AVPlayer playerWithPlayerItem:_bufferingPlayerItemB];
    _currentlyBufferingIndexB = [NSNumber numberWithInt:1];
    _nextIndexToBuffer = [NSNumber numberWithInt:2];
  } else {
    _nextIndexToBuffer = [NSNumber numberWithInt:1];
  }
}
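
To give a sense of the omitted bookkeeping: the progress callback for this version reloads a buffering player with the clip at _nextIndexToBuffer once its current clip is fully buffered, and pauses/unpauses the main player based on how far ahead the buffered range is. A rough sketch (the helpers isClipFullyBuffered: and totalBufferedSeconds, and the 0.5 s margin, are hypothetical rather than my exact code):

// Sketch only: the shape of the omitted bookkeeping for the two-buffering-player approach.
- (void)updateBufferingProgress
{
  // Move buffering player A on to the lowest-indexed unbuffered clip once its clip is done.
  // (The same check is repeated for _bufferingPlayerB / _currentlyBufferingIndexB.)
  if ([self isClipFullyBuffered:_bufferingPlayerA] &&      // hypothetical helper
      [_nextIndexToBuffer integerValue] < [_clipAssets count]) {
    _currentlyBufferingIndexA = _nextIndexToBuffer;
    _bufferingPlayerItemA = [AVPlayerItem playerItemWithAsset:_clipAssets[[_nextIndexToBuffer integerValue]]
                                 automaticallyLoadedAssetKeys:@[@"tracks"]];
    _bufferingPlayerA = [AVPlayer playerWithPlayerItem:_bufferingPlayerItemA];
    _nextIndexToBuffer = @([_nextIndexToBuffer integerValue] + 1);
  }

  // Pause the main player when playback catches up with what has been buffered,
  // and resume once there is a comfortable margin of buffered video ahead.
  float bufferedSeconds = [self totalBufferedSeconds];    // hypothetical helper
  float playerTimeSeconds = CMTimeGetSeconds([_player currentTime]);
  if (bufferedSeconds - playerTimeSeconds <= 0.0) {
    [_player pause];
  } else if (bufferedSeconds - playerTimeSeconds > 0.5) { // illustrative margin
    [_player play];
  }
}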

In addition, I needed to make sure the video and audio tracks weren't being merged as they were added to the AVMutableComposition, since that apparently interfered with buffering (perhaps the tracks weren't receiving the new data). Here's the code that builds the AVMutableComposition from the array of AVAssets:

- (AVPlayerItem*)playerItemForAssets:(NSMutableArray *)assets
{
  AVMutableComposition* composition = [AVMutableComposition composition];
  AVMutableCompositionTrack *compVideoTrack = [composition addMutableTrackWithMediaType:AVMediaTypeVideo
                                                                   preferredTrackID:kCMPersistentTrackID_Invalid];
  AVMutableCompositionTrack *compAudioTrack = [composition addMutableTrackWithMediaType:AVMediaTypeAudio
                                                                   preferredTrackID:kCMPersistentTrackID_Invalid];
  CMTime timeOffset = kCMTimeZero;
  for (AVAsset* asset in assets) {
    CMTimeRange editRange = CMTimeRangeMake(CMTimeMake(0, 600), asset.duration);
    NSError *editError;

    NSArray *videoTracks = [asset tracksWithMediaType:AVMediaTypeVideo];
    NSArray *audioTracks = [asset tracksWithMediaType:AVMediaTypeAudio];

    if ([videoTracks count] > 0) {
      AVAssetTrack *videoTrack = [videoTracks objectAtIndex:0];
      [compVideoTrack insertTimeRange:editRange
                              ofTrack:videoTrack
                               atTime:timeOffset
                                error:&editError];
    }

    if ([audioTracks count] > 0) {
      AVAssetTrack *audioTrack = [audioTracks objectAtIndex:0];
      [compAudioTrack insertTimeRange:editRange
                              ofTrack:audioTrack
                               atTime:timeOffset
                                error:&editError];
    }

    if ([videoTracks count] > 0 || [audioTracks count] > 0) {
      timeOffset = CMTimeAdd(timeOffset, asset.duration);
    }
  }
  AVPlayerItem* playerItem = [AVPlayerItem playerItemWithAsset:composition];
  return playerItem;
}

With this approach, buffering for the main player that uses the AVMutableComposition works well and quickly, at least in my setup.