iOS – How to expand the detectable area for a double tap in SwiftUI?


I am working on implementing tap gestures in a dynamic VideoPlayer built with AVKit. The intended behavior is that when a video is viewed in a feed (this is for a social media app), it plays without sound; tapping the video once enables sound, and double-tapping it makes it full screen.

Currently, the single tap works. However, the double tap isn't detected unless I tap in the top-right corner of the video.

import SwiftUI
import AVKit

struct VideoPlayerView: View {
    
    @StateObject private var viewModel: VideoPlayerViewModel
    
    init(url: URL, isFeedView: Bool = true) {
        _viewModel = StateObject(wrappedValue: .init(url: url, isFeedView: isFeedView))
    }
    
    var body: some View {
        ZStack {
            if let player: AVPlayer = viewModel.player {
                VideoPlayer(player: player)
                    .onAppear {
                        // Start playing or resume from the last known position if in feed view
                        if viewModel.isFeedView {
                            if let lastKnownTime = viewModel.lastKnownTime {
                                player.seek(to: CMTime(seconds: lastKnownTime, preferredTimescale: 600))
                            }
                            player.play()
                            player.volume = 0 // Set volume to 0 for feed view
                        }
                    }
                    .onDisappear {
                        // Pause the video and store the last known time
                        viewModel.lastKnownTime = player.currentTime().seconds
                        player.pause()
                    }
                    .contentShape(Rectangle()) // Intended to make the whole video frame tappable
                    .gesture(TapGesture(count: 2).onEnded {
                        // Double tap: toggle full-screen presentation
                        print("Double tap detected")
                        viewModel.isFullScreen.toggle()
                    })
                    .simultaneousGesture(TapGesture().onEnded {
                        // Single tap: unmute the video
                        print("Single tap detected")
                        player.volume = 1 // Set volume to 1
                    })
            }
        }
        .maxSize()
        .fullScreenCover(isPresented: $viewModel.isFullScreen) {
            AVPlayerViewControllerRepresented(viewModel: viewModel)
        }
    }
}

class VideoPlayerViewModel: ObservableObject {
    @Published var player: AVPlayer?
    @Published var lastKnownTime: Double?
    @Published var isFullScreen: Bool = false
    @Published var isFeedView: Bool
    
    init(url: URL, isFeedView: Bool = true) {
        player = AVPlayer(url: url)
        lastKnownTime = nil
        self.isFeedView = isFeedView
        if isFeedView {
            registerForPlaybackEndNotification()
        }
    }
    
    private func registerForPlaybackEndNotification() {
        NotificationCenter.default.addObserver(forName: .AVPlayerItemDidPlayToEndTime, object: player?.currentItem, queue: nil) { [weak self] _ in
            self?.videoDidFinish()
        }
    }
    
    private func videoDidFinish() {
        // Replay logic for feed view
        if isFeedView, let player = player {
            player.seek(to: .zero)
            player.play()
        }
    }
}

My current code for the gestures is based on this, but I want to expand the detectable area so that double-tapping anywhere on the video takes it full screen. I read that .contentShape(Rectangle()) is supposed to do that, but so far it hasn't worked. What am I missing?
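
For reference, this is a standalone sketch of how I understand .contentShape(Rectangle()) is supposed to expand a tap target (a simplified, hypothetical view, not my actual hierarchy):

import SwiftUI

struct TapAreaExample: View {
    @State private var tapCount = 0

    var body: some View {
        // The Text itself is small, but the content shape should make
        // the entire 300x300 frame respond to the double tap.
        Text("Taps: \(tapCount)")
            .frame(width: 300, height: 300)
            .contentShape(Rectangle())
            .gesture(TapGesture(count: 2).onEnded {
                tapCount += 1
            })
    }
}

This is the pattern I was trying to reproduce on the VideoPlayer above.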
