I am running perspective tests with ARKit and SceneKit. The idea is to improve the 3D rendering when displaying a flat 3D model on the ground. I have already opened another ticket for a perspective issue that is almost resolved (ARKit perspective rendering).
However, after many tests / 3D displays, I noticed that sometimes when I anchor the 3D model, its size can differ (in width and length). I normally display a 3D model that is 16 m long and 1.5 m wide, so you can imagine how much this distorts my rendering. I don't know why the displayed model's dimensions vary; maybe it comes from tracking and my test environment.
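One possible cause worth ruling out (a hedged sketch, not necessarily your issue): if the `physicalSize` configured for the reference image does not match the printed image exactly, ARKit anchors it at the wrong scale, and the anchored model's apparent size will vary with tracking quality. On iOS 13+ you can ask ARKit to estimate the real-world scale of the detected image and compare it against your configuration:

```swift
import ARKit

// Assumption: a standard ARWorldTrackingConfiguration session with
// detection images in an "AR Resources" asset catalog group.
let configuration = ARWorldTrackingConfiguration()
configuration.detectionImages = ARReferenceImage.referenceImages(
    inGroupNamed: "AR Resources", bundle: nil)
// Let ARKit estimate how large the printed image really is,
// instead of trusting physicalSize blindly (iOS 13+).
configuration.automaticImageScaleEstimationEnabled = true
sceneView.session.run(configuration)

// Later, in the delegate, check the estimate:
func renderer(_ renderer: SCNSceneRenderer, didUpdate node: SCNNode, for anchor: ARAnchor) {
    guard let imageAnchor = anchor as? ARImageAnchor else { return }
    // A value of 1.0 means the configured physicalSize matches reality;
    // a significant deviation would explain size differences between runs.
    print("Estimated scale factor: \(imageAnchor.estimatedScaleFactor)")
}
```

If `estimatedScaleFactor` deviates noticeably from 1.0 between sessions, the reference image's physical size (or its flatness / lighting) is likely the culprit rather than the model itself.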
Here is the code I use to add the 3D model to the scene:
    func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
        guard let imageAnchor = anchor as? ARImageAnchor else { return }
        let referenceImage = imageAnchor.referenceImage
        let imageAnchorPosition = imageAnchor.transform.columns.3
        print("Image detected")

        let modelName = "couloirV2"
        //let modelName = "lamp"
        guard let object = VirtualObject
            .availableObjects
            .filter({ $0.modelName == modelName })
            .first else { fatalError("Cannot get model \(modelName)") }
        print("Loading \(object)...")

        self.sceneView.prepare([object], completionHandler: { _ in
            self.updateQueue.async {
                // Translate the object's position to the reference node position.
                object.position.x = imageAnchorPosition.x
                object.position.y = imageAnchorPosition.y
                object.position.z = imageAnchorPosition.z
                // Save the initial y value for the slider handler function.
                self.tmpYPosition = object.position.y
                // Match the node's y orientation.
                object.orientation.y = node.orientation.y
                print("Adding object to the scene")
                // Prepare the object.
                object.load()
                // Show the origin axes.
                object.showObjectOrigin()
                // Translate on the z axis to line up exactly with the detected image.
                var translation = matrix_identity_float4x4
                translation.columns.3.z += Float(referenceImage.physicalSize.height / 2)
                object.simdTransform = matrix_multiply(object.simdTransform, translation)
                self.sceneView.scene.rootNode.addChildNode(object)
                self.virtualObjectInteraction.selectedObject = object
                self.sceneView.addOrUpdateAnchor(for: object)
            }
        })
    }
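One detail worth noting about the code above: it copies the anchor's position once, at detection time, and then adds the object to `rootNode`. Any later refinement ARKit makes to the image anchor's transform therefore never reaches the object. A common alternative (a sketch only, assuming `VirtualObject` is an `SCNNode` subclass and loading/selection handling stays as in the original) is to parent the object to the node ARKit created for the anchor, so it stays glued to the tracked image:

```swift
func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    guard let imageAnchor = anchor as? ARImageAnchor else { return }
    let referenceImage = imageAnchor.referenceImage

    // Assumption: `object` is obtained and loaded as in the original code.
    object.load()
    // Offset along z in the anchor's local space to line up with the image edge,
    // replacing the manual world-space matrix multiplication.
    object.simdPosition = SIMD3<Float>(0, 0, Float(referenceImage.physicalSize.height / 2))
    // Because the object is a child of the anchor's node, ARKit keeps it
    // aligned with the image as tracking improves; no position copying needed.
    node.addChildNode(object)
}
```

With this setup the orientation-matching and position-copying lines become unnecessary, since the object inherits the anchor node's transform automatically.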