Save AcquireCameraImageBytes() from Unity ARCore to storage as an image

Posted 2020-02-06 09:02

With Unity and the new 1.1 version of ARCore, the API exposes some new ways of getting the camera information. However, I can't find any good examples of saving it to local storage as a file, for example as a jpg.

The ARCore samples have a nice example of retrieving the camera data and then doing something with it here: https://github.com/google-ar/arcore-unity-sdk/blob/master/Assets/GoogleARCore/Examples/ComputerVision/Scripts/ComputerVisionController.cs#L212 — that class shows a few ways of retrieving the camera data, but nothing about saving it.

I've seen this question: "How to take & save picture / screenshot using Unity ARCore SDK?", which uses the older API for getting the data and doesn't really go into detail on saving, either.

What I ideally want is a way to turn the data from Frame.CameraImage.AcquireCameraImageBytes() in the API into a stored jpg on disk, through Unity.

Update

I've since got it working, mainly by digging through this issue on the ARCore GitHub page: https://github.com/google-ar/arcore-unity-sdk/issues/72#issuecomment-355134812 and modifying Sonny's answer below, so it's only fair that his answer gets accepted.

In case anyone else is trying to do this, these are the steps I had to take:

  1. Add a callback to the Start method to run your OnImageAvailable method when the image is available:

    public void Start()
    {
        // Run OnImageAvailable whenever the TextureReader has a new CPU image.
        TextureReaderComponent.OnImageAvailableCallback += OnImageAvailable;
    }
    
  2. Add a TextureReader (from the computer vision example provided with the SDK) to your camera and your script

  3. Your OnImageAvailable should look a bit like this:

    /// <summary>
    /// Handles a new CPU image.
    /// </summary>
    /// <param name="format">The format of the image.</param>
    /// <param name="width">Width of the image, in pixels.</param>
    /// <param name="height">Height of the image, in pixels.</param>
    /// <param name="pixelBuffer">Pointer to raw image buffer.</param>
    /// <param name="bufferSize">The size of the image buffer, in bytes.</param>
    private void OnImageAvailable(TextureReaderApi.ImageFormatType format, int width, int height, IntPtr pixelBuffer, int bufferSize)
    {
        if (m_TextureToRender == null || m_EdgeImage == null || m_ImageWidth != width || m_ImageHeight != height)
        {
            m_TextureToRender = new Texture2D(width, height, TextureFormat.RGBA32, false, false);
            m_EdgeImage = new byte[width * height * 4];
            m_ImageWidth = width;
            m_ImageHeight = height;
        }
    
        System.Runtime.InteropServices.Marshal.Copy(pixelBuffer, m_EdgeImage, 0, bufferSize);
    
        // Update the rendering texture with the sampled image.
        m_TextureToRender.LoadRawTextureData(m_EdgeImage);
        m_TextureToRender.Apply();
    
        var encodedJpg = m_TextureToRender.EncodeToJPG();
        var path = Application.persistentDataPath;
    
        File.WriteAllBytes(path + "/test.jpg", encodedJpg);
    }
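One small addition to step 1 that's worth making (this follows the standard C# event pattern rather than anything ARCore-specific): unsubscribe the handler when the component is destroyed, so the TextureReader doesn't keep invoking a dead script:

    public void OnDestroy()
    {
        // Mirror the subscription in Start to avoid a dangling delegate.
        TextureReaderComponent.OnImageAvailableCallback -= OnImageAvailable;
    }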
    

2 Answers

祖国的老花朵
Answered 2020-02-06 09:25

Since I'm not familiar with ARCore, I shall keep this generic.

  1. Load the byte array into your Texture2D using LoadRawTextureData() and Apply()
  2. Encode the texture using EncodeToJPG()
  3. Save the encoded data with File.WriteAllBytes(path + ".jpg", encodedBytes)
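Put together, the three steps above look something like this (a generic sketch: `rawBytes`, `width`, `height`, and the RGBA32 format are assumptions standing in for whatever your image source actually provides):

    using System.IO;
    using UnityEngine;

    public static class ImageSaver
    {
        // Saves a raw RGBA32 byte buffer to disk as a jpg.
        public static void SaveAsJpg(byte[] rawBytes, int width, int height, string path)
        {
            // 1. Load the byte array into a Texture2D.
            var texture = new Texture2D(width, height, TextureFormat.RGBA32, false);
            texture.LoadRawTextureData(rawBytes);
            texture.Apply();

            // 2. Encode the texture as jpg.
            var encodedBytes = texture.EncodeToJPG();

            // 3. Write the encoded data to disk.
            File.WriteAllBytes(path + ".jpg", encodedBytes);
        }
    }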
够拽才男人
Answered 2020-02-06 09:38

In Unity, it should be possible to load the raw image data into a texture and then save it to a JPG using UnityEngine.ImageConversion.EncodeToJPG. Example code:

public class Example : MonoBehaviour
{
    // The image dimensions are assumptions; in practice they should come
    // from the acquired image (image.Width / image.Height).
    [SerializeField] private int _width = 640;
    [SerializeField] private int _height = 480;

    private Texture2D _texture;
    private TextureFormat _format = TextureFormat.RGBA32;

    private void Awake()
    {
        _texture = new Texture2D(_width, _height, _format, false);
    }

    private void Update()
    {
        using (var image = Frame.CameraImage.AcquireCameraImageBytes())
        {
            if (!image.IsAvailable) return;

            // Load the data into a texture
            // (this is an expensive call, but it may be possible to optimize...)
            // Note: CameraImageBytes exposes raw plane pointers (e.g. image.Y)
            // rather than a ready-made RGBA buffer, so a format conversion is
            // likely needed before this produces a usable texture.
            _texture.LoadRawTextureData(image.Y, _width * _height);
            _texture.Apply();
        }
    }

    public void SaveImage()
    {
        var encodedJpg = _texture.EncodeToJPG();
        File.WriteAllBytes(Path.Combine(Application.persistentDataPath, "test.jpg"), encodedJpg);
    }
}

However, I'm not sure if the TextureFormat corresponds to a format that works with Frame.CameraImage.AcquireCameraImageBytes(). (I'm familiar with Unity but not ARCore.) See Unity's documentation on TextureFormat, and whether that is compatible with ARCore's ImageFormatType.
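As a sketch of that compatibility check (the enum member names here are taken from the ComputerVision example's TextureReaderApi, so treat them as assumptions): color output is 4 bytes per pixel and would line up with TextureFormat.RGBA32, while grayscale output is 1 byte per pixel and would pair with TextureFormat.R8.

    // Hypothetical mapping from the TextureReader's output format to the
    // Unity TextureFormat used when creating the Texture2D.
    private TextureFormat FormatFor(TextureReaderApi.ImageFormatType format)
    {
        switch (format)
        {
            case TextureReaderApi.ImageFormatType.ImageFormatColor:
                return TextureFormat.RGBA32; // 4 bytes per pixel
            case TextureReaderApi.ImageFormatType.ImageFormatGrayscale:
                return TextureFormat.R8;     // 1 byte per pixel
            default:
                throw new System.ArgumentOutOfRangeException(nameof(format));
        }
    }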

Also, test whether the code is performant enough for your application.

EDIT: As user @Lece explains, save the encoded data with File.WriteAllBytes. I've updated my code example above as I omitted that step originally.

EDIT #2: For the complete answer specific to ARCore, see the update to the question post. The comments here may also be useful - Jordan specified that "the main part was to use the texture reader from the computer vision sdk example here".
