Hello,
I am trying to create an app in Swift with Xcode that lets the user take a photo with the camera, and then sends that photo over HTTP to a server I have set up.
Don’t worry about privacy concerns: I am not planning to distribute this app anywhere, it is only meant for myself.
When a photo is taken with the app, the user has to either confirm it with “Use Photo” or retake it with “Retake Photo”.
My question is: how can I get rid of that confirmation step, so that the captured image is saved and sent immediately without the user having to choose? Is this possible with the standard camera API? I know the delegate method imagePickerController(_:didFinishPickingMediaWithInfo:) plays a role in this, and because of that maybe I have to use some third-party library to get this to work?
(Most of the code was written by ChatGPT and I don’t understand every single part of it just yet.)
import SwiftUI

struct ContentView: View {
    @State private var isImagePickerPresented = false
    @State private var capturedImage: UIImage?
    @State private var isCameraAuthorized = false

    var body: some View {
        VStack {
            Button("Take another photo") { // Only if the user wants to take another photo
                isImagePickerPresented.toggle()
            }
            .sheet(isPresented: $isImagePickerPresented) {
                ImagePicker(image: $capturedImage, onImageCapture: { imageData in
                    if let imageData = imageData {
                        uploadImage(imageData: imageData)
                    }
                })
            }
            .onAppear {
                // Display the camera immediately when the view appears
                isImagePickerPresented = true
            }

            // Manually send the image to the server. The app should send it
            // automatically after the user confirms.
            Button(action: {
                if let imageData = capturedImage?.jpegData(compressionQuality: 0.5) {
                    uploadImage(imageData: imageData)
                }
            }) {
                VStack {
                    Text("Cool")
                        .font(.system(size: 70))
                        .padding(69)
                        .overlay(
                            RoundedRectangle(cornerRadius: 15)
                                .stroke(Color.black, lineWidth: 8)
                        )
                }
                .contentShape(Rectangle())
            }
            .background(Color.red)
            .foregroundColor(.white)
            .cornerRadius(15)
        }
        .background(
            Image("gigachad") // This is just the image for the background of the app
                .scaledToFit()
                .edgesIgnoringSafeArea(.all)
        )
    }
}

// Here's main. Note: the @main entry point has to be a top-level type;
// it was previously nested inside ContentView, which won't compile.
@main
struct BacknForthApp: App {
    var body: some Scene {
        WindowGroup {
            ContentView()
        }
    }
}
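(For context: `uploadImage(imageData:)` is called above but not shown. It's roughly a plain `URLSession` POST along these lines — the URL below is a placeholder for my server, not a real endpoint:)

```swift
import Foundation

// Placeholder endpoint — stands in for my actual server's upload URL.
let uploadURL = URL(string: "https://example.com/upload")!

func uploadImage(imageData: Data) {
    var request = URLRequest(url: uploadURL)
    request.httpMethod = "POST"
    request.setValue("image/jpeg", forHTTPHeaderField: "Content-Type")

    // Send the raw JPEG bytes as the request body.
    let task = URLSession.shared.uploadTask(with: request, from: imageData) { _, response, error in
        if let error = error {
            print("Upload failed: \(error)")
        } else if let http = response as? HTTPURLResponse {
            print("Server responded with status \(http.statusCode)")
        }
    }
    task.resume()
}
```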
//
// ImagePicker.swift
// BacknForth
//

import SwiftUI

struct ImagePicker: UIViewControllerRepresentable {
    @Binding var image: UIImage?
    var onImageCapture: ((Data?) -> Void)
    @Environment(\.presentationMode) var presentationMode

    func makeCoordinator() -> Coordinator {
        return Coordinator(parent: self)
    }

    func makeUIViewController(context: Context) -> UIImagePickerController {
        let picker = UIImagePickerController()
        picker.delegate = context.coordinator
        picker.sourceType = .camera
        picker.allowsEditing = false
        return picker
    }

    func updateUIViewController(_ uiViewController: UIImagePickerController, context: Context) {}

    class Coordinator: NSObject, UINavigationControllerDelegate, UIImagePickerControllerDelegate {
        var parent: ImagePicker

        init(parent: ImagePicker) {
            self.parent = parent
        }

        func imagePickerController(_ picker: UIImagePickerController, didFinishPickingMediaWithInfo info: [UIImagePickerController.InfoKey: Any]) {
            if let uiImage = info[.originalImage] as? UIImage {
                parent.image = uiImage
                // Hand the JPEG data to the callback exactly once
                // (this was previously invoked twice for the same image).
                parent.onImageCapture(uiImage.jpegData(compressionQuality: 0.5))
            }
            parent.presentationMode.wrappedValue.dismiss() // Dismiss the ImagePicker
        }
    }
}
I’ve mostly just been going back and forth with ChatGPT to try and solve this. I think it has something to do with how the API works: the delegate method imagePickerController(_:didFinishPickingMediaWithInfo:) is only called once the user has confirmed the photo.
To get this to work I might have to use a different library or API, because otherwise I think I would have to customize how UIImagePickerController itself works, and I don’t know if that’s possible.
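For what it's worth, one approach I've seen suggested (not yet tested here) keeps the standard UIImagePickerController: setting `showsCameraControls = false` hides the built-in UI, including the Use Photo / Retake screen, and calling `takePicture()` fires `didFinishPickingMediaWithInfo` as soon as the photo is captured, with no confirmation step. A rough sketch of a modified `makeUIViewController`, with a hypothetical custom shutter button as the overlay (iOS 14+ for `UIAction`):

```swift
func makeUIViewController(context: Context) -> UIImagePickerController {
    let picker = UIImagePickerController()
    picker.delegate = context.coordinator
    picker.sourceType = .camera
    picker.allowsEditing = false

    // Hide the standard controls (including the Use Photo / Retake screen)
    // and supply a minimal overlay with our own shutter button.
    picker.showsCameraControls = false

    let shutter = UIButton(type: .system)
    shutter.setTitle("Shoot", for: .normal)
    shutter.frame = CGRect(x: 0, y: 0, width: 120, height: 60)
    shutter.center = CGPoint(x: UIScreen.main.bounds.midX,
                             y: UIScreen.main.bounds.maxY - 80)
    // takePicture() captures immediately and calls the delegate's
    // didFinishPickingMediaWithInfo with no confirmation prompt.
    shutter.addAction(UIAction { [weak picker] _ in
        picker?.takePicture()
    }, for: .touchUpInside)

    let overlay = UIView(frame: UIScreen.main.bounds)
    overlay.addSubview(shutter)
    picker.cameraOverlayView = overlay
    return picker
}
```

This way the existing Coordinator and upload flow stay unchanged; only the picker setup differs.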