Android Wear audio

Published 2019-07-22 17:42

Question:

I am fairly new to Android, but I have development experience and know Java. I inherited an app that now needs an Android Wear companion. I have managed to get a simple wearable app started and to send messages back and forth with the phone app.

The issue I am facing right now is how to pass the audio from the wearable's microphone back to the phone app to be processed. Following another question here, I managed to set up a 4-second timer that fills a buffer with audio, but now I am stuck on how to pass this data back to the app, and how to tell the app to process this data and nothing else.
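For reference, the 4-second capture described above is typically done with AudioRecord. A rough sketch (the sample rate, encoding, and class name here are assumptions, not taken from the question):

```java
import android.media.AudioFormat;
import android.media.AudioRecord;
import android.media.MediaRecorder;

// Records roughly 4 seconds of 16 kHz mono 16-bit PCM into a byte array.
public class AudioCapture {
    private static final int SAMPLE_RATE = 16000; // Hz, assumed
    private static final int SECONDS = 4;

    public byte[] recordFourSeconds() {
        int minBuf = AudioRecord.getMinBufferSize(
                SAMPLE_RATE,
                AudioFormat.CHANNEL_IN_MONO,
                AudioFormat.ENCODING_PCM_16BIT);
        AudioRecord recorder = new AudioRecord(
                MediaRecorder.AudioSource.MIC,
                SAMPLE_RATE,
                AudioFormat.CHANNEL_IN_MONO,
                AudioFormat.ENCODING_PCM_16BIT,
                minBuf);
        // 2 bytes per 16-bit mono sample.
        byte[] audio = new byte[SAMPLE_RATE * 2 * SECONDS];
        recorder.startRecording();
        int offset = 0;
        while (offset < audio.length) {
            int read = recorder.read(audio, offset, audio.length - offset);
            if (read <= 0) break; // read error or recorder stopped
            offset += read;
        }
        recorder.stop();
        recorder.release();
        return audio;
    }
}
```

Note that this requires the RECORD_AUDIO permission in the wearable app's manifest.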

I was looking into the MessageApi, but the audio data is too big to be sent that way. I was thinking maybe the ChannelApi, but I haven't been able to find any information on how to use it.

Any suggestions will be helpful.

Answer 1:

Using the ChannelApi is the right solution. You need to open a channel from your wearable app to your phone app with code similar to this:

Wearable.ChannelApi.openChannel(
        mGoogleApiClient, node.getId(), "/mypath").setResultCallback(
        new ResultCallback<ChannelApi.OpenChannelResult>() {
            @Override
            public void onResult(ChannelApi.OpenChannelResult openChannelResult) {
                if (openChannelResult.getStatus().isSuccess()) {
                    mChannel = openChannelResult.getChannel();
                    mChannel.getOutputStream(mGoogleApiClient).setResultCallback(

                            new ResultCallback<Channel.GetOutputStreamResult>() {
                                @Override
                                public void onResult(Channel.GetOutputStreamResult getOutputStreamResult) {
                                    if (getOutputStreamResult.getStatus().isSuccess()) {
                                        mOutputStream = getOutputStreamResult.getOutputStream();
                                    } else {
                                        // handle failure, and close channel
                                    }
                                }
                            });
                }
            }
        });
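Once the callback has given you the stream, streaming the recorded buffer is plain `java.io` work. A small sketch (the helper name and chunk size are illustrative assumptions; `out` would be the `mOutputStream` obtained above):

```java
import java.io.IOException;
import java.io.OutputStream;

// Writes a recorded audio buffer to the channel's OutputStream in chunks.
public final class ChannelWriter {
    private ChannelWriter() {}

    public static void writeAudio(OutputStream out, byte[] audio) throws IOException {
        final int chunkSize = 1024; // assumed; any reasonable size works
        for (int offset = 0; offset < audio.length; offset += chunkSize) {
            int len = Math.min(chunkSize, audio.length - offset);
            out.write(audio, offset, len);
        }
        out.flush();
    }
}
```

Remember to close the stream (or the channel) when you are done writing, so the phone side sees end-of-stream.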

Then you have an OutputStream to write to on your wearable side. On the phone side, you listen for a channel opening by adding a ChannelApi.ChannelListener through Wearable.ChannelApi.addListener. That interface has a number of callbacks for you to use; the onChannelOpened(Channel channel) method informs you that a channel has opened and passes you a Channel object, from which you can get an InputStream:
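Registering that listener on the phone side might look like this sketch (the class name is an assumption; the four callbacks are the ones ChannelApi.ChannelListener declares):

```java
import com.google.android.gms.common.api.GoogleApiClient;
import com.google.android.gms.wearable.Channel;
import com.google.android.gms.wearable.ChannelApi;
import com.google.android.gms.wearable.Wearable;

// On the phone: react to the wearable opening a channel.
public class PhoneChannelListener implements ChannelApi.ChannelListener {
    private final GoogleApiClient mGoogleApiClient;

    public PhoneChannelListener(GoogleApiClient client) {
        mGoogleApiClient = client;
    }

    public void register() {
        Wearable.ChannelApi.addListener(mGoogleApiClient, this);
    }

    @Override
    public void onChannelOpened(Channel channel) {
        // The wearable opened a channel; fetch its InputStream here.
    }

    @Override
    public void onChannelClosed(Channel channel, int closeReason, int appSpecificErrorCode) {
        // Channel fully closed; release any resources tied to it.
    }

    @Override
    public void onInputClosed(Channel channel, int closeReason, int appSpecificErrorCode) {
        // The sending side finished writing; all bytes have arrived.
    }

    @Override
    public void onOutputClosed(Channel channel, int closeReason, int appSpecificErrorCode) {
        // Our own output side closed (unused if the phone only reads).
    }
}
```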

channel.getInputStream(mGoogleApiClient).setResultCallback(
        new ResultCallback<Channel.GetInputStreamResult>() {
            @Override
            public void onResult(Channel.GetInputStreamResult getInputStreamResult) {
                if (getInputStreamResult.getStatus().isSuccess()) {
                    InputStream is = getInputStreamResult.getInputStream();
                } else {
                    // handle errors
                }
            }
        });

There are a number of other methods in that listener that are useful for informing you when the channel is closed, etc. (see the JavaDocs). Now your wearable app can write to that channel and the phone app will receive the bytes at the other end of the channel.
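Draining that InputStream into a buffer for processing is again plain `java.io`; one possible helper (the name is an assumption):

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;

// Reads the channel's InputStream to end-of-stream and returns the bytes.
public final class ChannelReader {
    private ChannelReader() {}

    public static byte[] readAll(InputStream in) throws IOException {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        byte[] buf = new byte[1024];
        int n;
        while ((n = in.read(buf)) != -1) {
            out.write(buf, 0, n);
        }
        return out.toByteArray();
    }
}
```

Do this off the main thread; read() blocks until data arrives or the wearable closes its end.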

Note: since multiple "nodes" may be connected to each other, it is best to use the CapabilityApi to identify the node that is capable of processing your audio, i.e. your phone. In other words, your app on the phone side registers itself as capable of processing your audio stream, and your wearable app then searches among the connected nodes for one providing that capability, to target with the channel it opens.
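With the phone app declaring a capability (e.g. in res/values/wear.xml via an android_wear_capabilities string-array), the wearable's lookup could be sketched like this (the capability name "process_audio" and the class name are hypothetical):

```java
import com.google.android.gms.common.api.GoogleApiClient;
import com.google.android.gms.common.api.ResultCallback;
import com.google.android.gms.wearable.CapabilityApi;
import com.google.android.gms.wearable.Node;
import com.google.android.gms.wearable.Wearable;

import java.util.Set;

// On the wearable: find the node advertising the audio-processing
// capability and remember its id for the openChannel call.
public class AudioNodeFinder {
    private static final String CAPABILITY = "process_audio"; // assumed name
    private String mAudioNodeId;

    public void findAudioNode(GoogleApiClient client) {
        Wearable.CapabilityApi.getCapability(
                client, CAPABILITY, CapabilityApi.FILTER_REACHABLE)
                .setResultCallback(
                        new ResultCallback<CapabilityApi.GetCapabilityResult>() {
                            @Override
                            public void onResult(CapabilityApi.GetCapabilityResult result) {
                                Set<Node> nodes = result.getCapability().getNodes();
                                for (Node node : nodes) {
                                    mAudioNodeId = node.getId();
                                    if (node.isNearby()) break; // prefer a nearby node
                                }
                            }
                        });
    }
}
```

The resulting node id is what you would pass as node.getId() in the openChannel call at the top of this answer.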