-
It is not rare for my personal app or the sample to crash. Sometimes it takes a while and sometimes it crashes within a few seconds. The error pattern is almost always the same: the camera stream hangs and then the view crashes. Below is one kind of stack trace; I have more if needed.
Replies: 6 comments 13 replies
-
It feels like something I have already seen. My first quick guess is a too-high frame rate in the Renderer. Have a look at RealityCore - Renderer:

```kotlin
override fun doFrame(frameTimeNanos: Long) {
    choreographer.postFrameCallback(this)

    // limit to max fps
    val nanoTime = System.nanoTime()
    val tick = nanoTime / (TimeUnit.SECONDS.toNanos(1) / maxFramesPerSecond)
    if (lastTick / frameRate.factor == tick / frameRate.factor) {
        return
    }
    lastTick = tick

    // render using frame from last tick to reduce possibility of jitter but increases latency
    if (// only render if we have an ar frame
        timestamp != 0L &&
        uiHelper.isReadyToRender &&
        // This means you are sending frames too quickly to the GPU
        renderer.beginFrame(swapChain!!, frameTimeNanos)
    ) {
        renderer.render(view)
        renderer.endFrame()
    }

    synchronized(mirrors) {
        mirrors.iterator().forEach { mirror ->
            if (mirror.surface == null) {
                if (mirror.swapChain != null) {
                    engine.destroySwapChain(mirror.swapChain!!)
                }
                mirrors.remove(mirror)
            } else if (mirror.swapChain == null) {
                mirror.swapChain = engine.createSwapChain(mirror.surface!!)
            }
        }
    }

    val frame = arSession.update()
    // During startup the camera system may not produce actual images immediately. In
    // this common case, a frame with timestamp = 0 will be returned.
    if (frame.timestamp != 0L &&
        frame.timestamp != timestamp
    ) {
        timestamp = frame.timestamp
        doFrame(frame)
    }
}
```

...and it's a good idea to read the Kotlin code before writing it.
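To see just the throttling in isolation, here is a minimal standalone sketch of the tick-based limiting idea (simplified, without the extra `frameRate.factor` division; the class and field names are illustrative and not part of the SceneView API):

```java
import java.util.concurrent.TimeUnit;

// Illustrative helper, not SceneView API: decides whether a frame should be
// rendered by mapping timestamps onto ticks of length 1/maxFramesPerSecond.
public class FrameRateLimiter {
    private final long maxFramesPerSecond;
    private long lastTick = -1;

    public FrameRateLimiter(long maxFramesPerSecond) {
        this.maxFramesPerSecond = maxFramesPerSecond;
    }

    public boolean shouldRender(long nanoTime) {
        long tick = nanoTime / (TimeUnit.SECONDS.toNanos(1) / maxFramesPerSecond);
        if (tick == lastTick) {
            // Still inside the same tick as the last rendered frame: skip it.
            return false;
        }
        lastTick = tick;
        return true;
    }
}
```

A caller would then only render when `shouldRender(System.nanoTime())` returns true, which is what the early `return` in `doFrame` achieves.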
-
Maybe the ARCore SharedCamera sample can be useful.
-
I created an issue on the ARCore repo: google-ar/arcore-android-sdk#1262
-
@RGregat Can you confirm that your crash happens with a clean clone of the latest master version of the gltf sample?
-
Why is it actually important to configure the UpdateMode to LATEST_CAMERA_IMAGE and not BLOCKING? Sceneform is not very specific about it:

```java
if (updateMode != Config.UpdateMode.LATEST_CAMERA_IMAGE) {
    throw new RuntimeException(
        "Invalid ARCore UpdateMode "
            + updateMode
            + ", Sceneform requires that the ARCore session is configured to the "
            + "UpdateMode LATEST_CAMERA_IMAGE.");
}
```

In the ARCore Java helloar sample I found this comment:

```java
// Obtain the current frame from ARSession. When the configuration is set to
// UpdateMode.BLOCKING (it is by default), this will throttle the rendering to the
// camera framerate.
```

So is this

```java
// limit to max fps
long nanoTime = System.nanoTime();
long tick = nanoTime / (TimeUnit.SECONDS.toNanos(1) / MAX_FRAMES_PER_SECONDS);
if (lastTick / frameRate.factor() == tick / frameRate.factor()) {
    //Log.d("SceneView", "Skip frame");
    return;
}
lastTick = tick;
```

more or less a workaround for the UpdateMode being set to LATEST_CAMERA_IMAGE? I removed the UpdateMode check, set the UpdateMode to BLOCKING, and the gltf sample app started normally. But there must be a reason why the Sceneform devs wanted to stick to LATEST_CAMERA_IMAGE.
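For reference, here is a minimal sketch of how that experiment looks with the standard ARCore `Config` API (the wrapper class and method names are just illustrative):

```java
import com.google.ar.core.Config;
import com.google.ar.core.Session;

class ArSessionConfigurator {
    // Sketch: explicitly set the ARCore session update mode. Sceneform expects
    // LATEST_CAMERA_IMAGE, while BLOCKING makes Session.update() wait for new
    // camera frames and thereby throttles rendering to the camera frame rate.
    static void setUpdateMode(Session session, Config.UpdateMode updateMode) {
        Config config = new Config(session);
        config.setUpdateMode(updateMode);
        session.configure(config);
    }
}
```

The experiment described above then amounts to calling `setUpdateMode(session, Config.UpdateMode.BLOCKING)` after removing the check quoted above.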
-
I guess I haven't found my main issue yet, but at least I found something. In `ArSceneView::onBeginFrame` the function `cameraStream.initializeTexture(currentFrame)` is called twice. This is the function body:

```java
public void initializeTexture(Frame frame) {
    if (isTextureInitialized()) {
        return;
    }

    // External Camera Texture
    Camera arCamera = frame.getCamera();
    CameraIntrinsics intrinsics = arCamera.getTextureIntrinsics();
    int[] dimensions = intrinsics.getImageDimensions();
    cameraTexture = new ExternalTexture(
            cameraTextureId,
            dimensions[0],
            dimensions[1]);

    if (depthOcclusionMode == DepthOcclusionMode.DEPTH_OCCLUSION_ENABLED && (
            depthMode == DepthMode.DEPTH ||
            depthMode == DepthMode.RAW_DEPTH)) {
        if (occlusionCameraMaterial != null) {
            isTextureInitialized = true;
            setOcclusionMaterial(occlusionCameraMaterial);
            initOrUpdateRenderableMaterial(occlusionCameraMaterial);
        }
    } else {
        if (cameraMaterial != null) {
            isTextureInitialized = true;
            setCameraMaterial(cameraMaterial);
            initOrUpdateRenderableMaterial(cameraMaterial);
        }
    }
}
```

The creation of the ExternalTexture can therefore be executed twice. I have now wrapped this part

```java
Camera arCamera = frame.getCamera();
CameraIntrinsics intrinsics = arCamera.getTextureIntrinsics();
int[] dimensions = intrinsics.getImageDimensions();
cameraTexture = new ExternalTexture(
        cameraTextureId,
        dimensions[0],
        dimensions[1]);
```

in an if statement that checks whether `cameraTexture` is null or not. Well, as I said, this is a minor thing and maybe not the root cause of my Filament crashes, but maybe one piece. I will create a PR as soon as possible.
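For clarity, this is roughly what the guarded version of that block looks like; it is only a sketch of the change described above, reusing the field names from the snippet, not the final PR code:

```java
// Only create the ExternalTexture if it does not exist yet, so that a second
// call to initializeTexture() from ArSceneView.onBeginFrame() becomes a no-op.
if (cameraTexture == null) {
    Camera arCamera = frame.getCamera();
    CameraIntrinsics intrinsics = arCamera.getTextureIntrinsics();
    int[] dimensions = intrinsics.getImageDimensions();
    cameraTexture = new ExternalTexture(
            cameraTextureId,
            dimensions[0],
            dimensions[1]);
}
```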