How can I record a conversation / phone call on iOS?

Published 2019-01-01 14:30

Question:

Is it theoretically possible to record a phone call on iPhone?

I'm accepting answers which:

  • may or may not require the phone to be jailbroken
  • may or may not pass Apple's guidelines due to use of private APIs (I don't care; it is not for the App Store)
  • may or may not use private SDKs

I don't want answers that just bluntly say "Apple does not allow that". I know there is no official way of doing it, certainly not for an App Store application, and I know there are call recording apps that place outgoing calls through their own servers.

Answer 1:

Here you go: a complete working example. The tweak should be loaded into the mediaserverd daemon. It records every phone call to /var/mobile/Media/DCIM/result.m4a. The audio file has two channels: left is the microphone, right is the speaker. On the iPhone 4S the call is recorded only when the speakerphone is on. On the iPhone 5, 5C and 5S the call is recorded either way. There may be small hiccups when switching the speaker on/off, but recording continues.
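
For context (build details are out of scope here): with a standard Theos/CydiaSubstrate setup, the dylib is scoped to mediaserverd by the filter plist that sits next to it in /Library/MobileSubstrate/DynamicLibraries/. A minimal sketch in OpenStep plist format, assuming the tweak is called CallRecorder (a hypothetical name, so the file would be CallRecorder.plist):

{ Filter = { Executables = ( "mediaserverd" ); }; }

The tweak source itself: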

#import <AudioToolbox/AudioToolbox.h>
#import <Foundation/Foundation.h>
#import <libkern/OSAtomic.h>
#include <substrate.h> //CydiaSubstrate; provides MSHookFunction

//CoreTelephony.framework (private symbols, declared by hand)
extern "C" CFStringRef const kCTCallStatusChangeNotification;
extern "C" CFStringRef const kCTCallStatus;
extern "C" id CTTelephonyCenterGetDefault();
extern "C" void CTTelephonyCenterAddObserver(id ct, void* observer, CFNotificationCallback callBack, CFStringRef name, void *object, CFNotificationSuspensionBehavior sb);
extern "C" int CTGetCurrentCallCount();
enum
{
    kCTCallStatusActive = 1,
    kCTCallStatusHeld = 2,
    kCTCallStatusOutgoing = 3,
    kCTCallStatusIncoming = 4,
    kCTCallStatusHanged = 5
};

NSString* kMicFilePath = @"/var/mobile/Media/DCIM/mic.caf";
NSString* kSpeakerFilePath = @"/var/mobile/Media/DCIM/speaker.caf";
NSString* kResultFilePath = @"/var/mobile/Media/DCIM/result.m4a";

OSSpinLock phoneCallIsActiveLock = 0;
OSSpinLock speakerLock = 0;
OSSpinLock micLock = 0;

ExtAudioFileRef micFile = NULL;
ExtAudioFileRef speakerFile = NULL;

BOOL phoneCallIsActive = NO;

void Convert()
{
    //File URLs
    CFURLRef micUrl = CFURLCreateWithFileSystemPath(NULL, (CFStringRef)kMicFilePath, kCFURLPOSIXPathStyle, false);
    CFURLRef speakerUrl = CFURLCreateWithFileSystemPath(NULL, (CFStringRef)kSpeakerFilePath, kCFURLPOSIXPathStyle, false);
    CFURLRef mixUrl = CFURLCreateWithFileSystemPath(NULL, (CFStringRef)kResultFilePath, kCFURLPOSIXPathStyle, false);

    ExtAudioFileRef micFile = NULL;
    ExtAudioFileRef speakerFile = NULL;
    ExtAudioFileRef mixFile = NULL;

    //Opening input files (speaker and mic)
    ExtAudioFileOpenURL(micUrl, &micFile);
    ExtAudioFileOpenURL(speakerUrl, &speakerFile);

    //Reading input file audio format (mono LPCM)
    AudioStreamBasicDescription inputFormat, outputFormat;
    UInt32 descSize = sizeof(inputFormat);
    ExtAudioFileGetProperty(micFile, kExtAudioFileProperty_FileDataFormat, &descSize, &inputFormat);
    int sampleSize = inputFormat.mBytesPerFrame;

    //Filling input stream format for output file (stereo LPCM)
    FillOutASBDForLPCM(inputFormat, inputFormat.mSampleRate, 2, inputFormat.mBitsPerChannel, inputFormat.mBitsPerChannel, true, false, false);

    //Filling output file audio format (AAC)
    memset(&outputFormat, 0, sizeof(outputFormat));
    outputFormat.mFormatID = kAudioFormatMPEG4AAC;
    outputFormat.mSampleRate = 8000;
    outputFormat.mFormatFlags = kMPEG4Object_AAC_Main;
    outputFormat.mChannelsPerFrame = 2;

    //Opening output file
    ExtAudioFileCreateWithURL(mixUrl, kAudioFileM4AType, &outputFormat, NULL, kAudioFileFlags_EraseFile, &mixFile);
    ExtAudioFileSetProperty(mixFile, kExtAudioFileProperty_ClientDataFormat, sizeof(inputFormat), &inputFormat);

    //Freeing URLs
    CFRelease(micUrl);
    CFRelease(speakerUrl);
    CFRelease(mixUrl);

    //Setting up audio buffers
    int bufferSizeInSamples = 64 * 1024;

    AudioBufferList micBuffer;
    micBuffer.mNumberBuffers = 1;
    micBuffer.mBuffers[0].mNumberChannels = 1;
    micBuffer.mBuffers[0].mDataByteSize = sampleSize * bufferSizeInSamples;
    micBuffer.mBuffers[0].mData = malloc(micBuffer.mBuffers[0].mDataByteSize);

    AudioBufferList speakerBuffer;
    speakerBuffer.mNumberBuffers = 1;
    speakerBuffer.mBuffers[0].mNumberChannels = 1;
    speakerBuffer.mBuffers[0].mDataByteSize = sampleSize * bufferSizeInSamples;
    speakerBuffer.mBuffers[0].mData = malloc(speakerBuffer.mBuffers[0].mDataByteSize);

    AudioBufferList mixBuffer;
    mixBuffer.mNumberBuffers = 1;
    mixBuffer.mBuffers[0].mNumberChannels = 2;
    mixBuffer.mBuffers[0].mDataByteSize = sampleSize * bufferSizeInSamples * 2;
    mixBuffer.mBuffers[0].mData = malloc(mixBuffer.mBuffers[0].mDataByteSize);

    //Converting
    while (true)
    {
        //Reading data from input files. Frame counts are tracked separately
        //because ExtAudioFileRead updates them (and the buffer sizes) with
        //what was actually read.
        UInt32 micFrames = bufferSizeInSamples;
        UInt32 speakerFrames = bufferSizeInSamples;
        micBuffer.mBuffers[0].mDataByteSize = sampleSize * bufferSizeInSamples;
        speakerBuffer.mBuffers[0].mDataByteSize = sampleSize * bufferSizeInSamples;
        ExtAudioFileRead(micFile, &micFrames, &micBuffer);
        ExtAudioFileRead(speakerFile, &speakerFrames, &speakerBuffer);

        //Interleaving only as many frames as both files provided; any trailing
        //samples in the longer file are dropped.
        UInt32 framesToRead = micFrames < speakerFrames ? micFrames : speakerFrames;
        if (framesToRead == 0)
        {
            break;
        }

        //Building interleaved stereo buffer - left channel is mic, right - speaker
        for (int i = 0; i < framesToRead; i++)
        {
            memcpy((char*)mixBuffer.mBuffers[0].mData + i * sampleSize * 2, (char*)micBuffer.mBuffers[0].mData + i * sampleSize, sampleSize);
            memcpy((char*)mixBuffer.mBuffers[0].mData + i * sampleSize * 2 + sampleSize, (char*)speakerBuffer.mBuffers[0].mData + i * sampleSize, sampleSize);
        }

        //Writing to output file - LPCM will be converted to AAC
        ExtAudioFileWrite(mixFile, framesToRead, &mixBuffer);
    }

    //Closing files
    ExtAudioFileDispose(micFile);
    ExtAudioFileDispose(speakerFile);
    ExtAudioFileDispose(mixFile);

    //Freeing audio buffers
    free(micBuffer.mBuffers[0].mData);
    free(speakerBuffer.mBuffers[0].mData);
    free(mixBuffer.mBuffers[0].mData);
}

void Cleanup()
{
    [[NSFileManager defaultManager] removeItemAtPath:kMicFilePath error:NULL];
    [[NSFileManager defaultManager] removeItemAtPath:kSpeakerFilePath error:NULL];
}

void CoreTelephonyNotificationCallback(CFNotificationCenterRef center, void *observer, CFStringRef name, const void *object, CFDictionaryRef userInfo)
{
    NSDictionary* data = (NSDictionary*)userInfo;

    if ([(NSString*)name isEqualToString:(NSString*)kCTCallStatusChangeNotification])
    {
        int currentCallStatus = [data[(NSString*)kCTCallStatus] integerValue];

        if (currentCallStatus == kCTCallStatusActive)
        {
            OSSpinLockLock(&phoneCallIsActiveLock);
            phoneCallIsActive = YES;
            OSSpinLockUnlock(&phoneCallIsActiveLock);
        }
        else if (currentCallStatus == kCTCallStatusHanged)
        {
            if (CTGetCurrentCallCount() > 0)
            {
                return;
            }

            OSSpinLockLock(&phoneCallIsActiveLock);
            phoneCallIsActive = NO;
            OSSpinLockUnlock(&phoneCallIsActiveLock);

            //Closing mic file
            OSSpinLockLock(&micLock);
            if (micFile != NULL)
            {
                ExtAudioFileDispose(micFile);
            }
            micFile = NULL;
            OSSpinLockUnlock(&micLock);

            //Closing speaker file
            OSSpinLockLock(&speakerLock);
            if (speakerFile != NULL)
            {
                ExtAudioFileDispose(speakerFile);
            }
            speakerFile = NULL;
            OSSpinLockUnlock(&speakerLock);

            Convert();
            Cleanup();
        }
    }
}

OSStatus(*AudioUnitProcess_orig)(AudioUnit unit, AudioUnitRenderActionFlags *ioActionFlags, const AudioTimeStamp *inTimeStamp, UInt32 inNumberFrames, AudioBufferList *ioData);
OSStatus AudioUnitProcess_hook(AudioUnit unit, AudioUnitRenderActionFlags *ioActionFlags, const AudioTimeStamp *inTimeStamp, UInt32 inNumberFrames, AudioBufferList *ioData)
{
    OSSpinLockLock(&phoneCallIsActiveLock);
    if (phoneCallIsActive == NO)
    {
        OSSpinLockUnlock(&phoneCallIsActiveLock);
        return AudioUnitProcess_orig(unit, ioActionFlags, inTimeStamp, inNumberFrames, ioData);
    }
    OSSpinLockUnlock(&phoneCallIsActiveLock);

    ExtAudioFileRef* currentFile = NULL;
    OSSpinLock* currentLock = NULL;

    AudioComponentDescription unitDescription = {0};
    AudioComponentGetDescription(AudioComponentInstanceGetComponent(unit), &unitDescription);
    //'agcc', 'mbdp' - iPhone 4S, iPhone 5
    //'agc2', 'vrq2' - iPhone 5C, iPhone 5S
    if (unitDescription.componentSubType == 'agcc' || unitDescription.componentSubType == 'agc2')
    {
        currentFile = &micFile;
        currentLock = &micLock;
    }
    else if (unitDescription.componentSubType == 'mbdp' || unitDescription.componentSubType == 'vrq2')
    {
        currentFile = &speakerFile;
        currentLock = &speakerLock;
    }

    if (currentFile != NULL)
    {
        OSSpinLockLock(currentLock);

        //Opening file
        if (*currentFile == NULL)
        {
            //Obtaining input audio format
            AudioStreamBasicDescription desc;
            UInt32 descSize = sizeof(desc);
            AudioUnitGetProperty(unit, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Input, 0, &desc, &descSize);

            //Opening audio file
            CFURLRef url = CFURLCreateWithFileSystemPath(NULL, (CFStringRef)((currentFile == &micFile) ? kMicFilePath : kSpeakerFilePath), kCFURLPOSIXPathStyle, false);
            ExtAudioFileRef audioFile = NULL;
            OSStatus result = ExtAudioFileCreateWithURL(url, kAudioFileCAFType, &desc, NULL, kAudioFileFlags_EraseFile, &audioFile);
            if (result != 0)
            {
                *currentFile = NULL;
            }
            else
            {
                *currentFile = audioFile;

                //Writing audio format
                ExtAudioFileSetProperty(*currentFile, kExtAudioFileProperty_ClientDataFormat, sizeof(desc), &desc);
            }
            CFRelease(url);
        }
        else
        {
            //Writing audio buffer
            ExtAudioFileWrite(*currentFile, inNumberFrames, ioData);
        }

        OSSpinLockUnlock(currentLock);
    }

    return AudioUnitProcess_orig(unit, ioActionFlags, inTimeStamp, inNumberFrames, ioData);
}

__attribute__((constructor))
static void initialize()
{
    CTTelephonyCenterAddObserver(CTTelephonyCenterGetDefault(), NULL, CoreTelephonyNotificationCallback, NULL, NULL, CFNotificationSuspensionBehaviorHold);

    MSHookFunction(AudioUnitProcess, AudioUnitProcess_hook, &AudioUnitProcess_orig);
}

A few words about what's going on. The AudioUnitProcess function is used for processing audio streams: applying effects, mixing, converting and so on. We hook AudioUnitProcess in order to access the phone call's audio streams, which are processed in various ways while a call is active.

We listen for CoreTelephony notifications to track phone call status changes. When we receive audio samples we need to determine where they come from: the microphone or the speaker. This is done using the componentSubType field of the AudioComponentDescription structure. Now, you might think: why not store the AudioUnit objects so we don't have to check componentSubType every time? I tried that, but it breaks everything when you switch the speaker on/off on an iPhone 5, because the AudioUnit objects are recreated. So instead we open two audio files (one for the microphone and one for the speaker) and write samples into them. When the phone call ends we receive the appropriate CoreTelephony notification and close the files. That leaves two separate files with microphone and speaker audio that need to be merged, which is what Convert() is for. It's pretty simple if you know the API; I don't think I need to explain it, the comments are enough.
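
The componentSubType codes are device-specific, so on models other than those listed in the code you may need to discover them yourself. One way (a sketch; the FourCC helper is mine, not part of the tweak above) is to log every subtype that passes through the hook during a call and note which units appear only while the call is active:

static NSString* FourCC(OSType code)
{
    //Renders an OSType such as 'agcc' as a readable four-character string
    char chars[5] = {
        (char)((code >> 24) & 0xFF),
        (char)((code >> 16) & 0xFF),
        (char)((code >> 8) & 0xFF),
        (char)(code & 0xFF),
        0
    };
    return [NSString stringWithUTF8String:chars];
}

//Inside AudioUnitProcess_hook, right after unitDescription is filled in:
//NSLog(@"AudioUnitProcess subtype: %@", FourCC(unitDescription.componentSubType));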

About the locks. There are many threads in mediaserverd. Audio processing and CoreTelephony notifications happen on different threads, so we need some kind of synchronization. I chose spin locks because they are fast and because the chance of lock contention is small in our case. On the iPhone 4S, and even on the iPhone 5, all the work in AudioUnitProcess should be done as fast as possible; otherwise you will hear hiccups from the device speaker, which is obviously not good.
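
One caveat if you revisit this on newer systems: OSSpinLock was deprecated in iOS 10 because it can misbehave under priority inversion. The same pattern could be written with os_unfair_lock instead (a sketch, not what the tweak above uses):

#import <os/lock.h>

os_unfair_lock phoneCallIsActiveLock = OS_UNFAIR_LOCK_INIT;

//Same critical section as in CoreTelephonyNotificationCallback above
os_unfair_lock_lock(&phoneCallIsActiveLock);
phoneCallIsActive = YES;
os_unfair_lock_unlock(&phoneCallIsActiveLock);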



Answer 2:

Yes. Audio Recorder by a developer named Limneos does that (and quite well). You can find it on Cydia. It can record any type of call on the iPhone 5 and up without using any servers. The call is saved on the device in an audio file. It also supports the iPhone 4S, but for speaker calls only.

This tweak is known to be the first tweak ever that managed to record both audio streams without using any third-party servers, VoIP, or anything similar.

The developer placed beeps on the other side of the call to alert the person you are recording, but those were removed too by hackers across the net. To answer your question: yes, it's very much possible, and not just theoretically.

\"enter

Further reading

  • https://stackoverflow.com/a/19413363/202451
  • http://forums.macrumors.com/showthread.php?t=1566350
  • https://github.com/nst/iOS-Runtime-Headers


Answer 3:

The only solution I can think of is to use the Core Telephony framework, and more specifically the callEventHandler property, to intercept when a call comes in, and then to use an AVAudioRecorder to record the voice of the person holding the phone (and maybe a little of the voice of the person on the other end). This is obviously not perfect, and would only work if your application is in the foreground at the time of the call, but it may be the best you can get. See more about finding out whether there is an incoming phone call here: Can we fire an event when ever there is Incoming and Outgoing call in iphone?.

EDIT:

.h:

#import <AVFoundation/AVFoundation.h>
#import <CoreTelephony/CTCallCenter.h>
#import <CoreTelephony/CTCall.h>
@property (strong, nonatomic) AVAudioRecorder *audioRecorder;
@property (strong, nonatomic) CTCallCenter *callCenter; //must be retained, see below

ViewDidLoad:

NSArray *dirPaths;
NSString *docsDir;

dirPaths = NSSearchPathForDirectoriesInDomains(
    NSDocumentDirectory, NSUserDomainMask, YES);
docsDir = dirPaths[0];

NSString *soundFilePath = [docsDir
   stringByAppendingPathComponent:@"sound.caf"];

NSURL *soundFileURL = [NSURL fileURLWithPath:soundFilePath];

NSDictionary *recordSettings = [NSDictionary
        dictionaryWithObjectsAndKeys:
        [NSNumber numberWithInt:AVAudioQualityMin],
        AVEncoderAudioQualityKey,
        [NSNumber numberWithInt:16],
        AVEncoderBitRateKey,
        [NSNumber numberWithInt: 2],
        AVNumberOfChannelsKey,
        [NSNumber numberWithFloat:44100.0],
        AVSampleRateKey,
        nil];

NSError *error = nil;

_audioRecorder = [[AVAudioRecorder alloc]
              initWithURL:soundFileURL
              settings:recordSettings
              error:&error];

if (error)
{
    NSLog(@"error: %@", [error localizedDescription]);
} else {
    [_audioRecorder prepareToRecord];
}

//The call center has to live in a strong property; a local variable is
//released under ARC and the event handler never fires.
self.callCenter = [[CTCallCenter alloc] init];

[self.callCenter setCallEventHandler:^(CTCall *call) {
  if ([[call callState] isEqual:CTCallStateConnected]) {
    [_audioRecorder record];
  } else if ([[call callState] isEqual:CTCallStateDisconnected]) {
    [_audioRecorder stop];
  }
}];

AppDelegate.m:

//Keeps the recording going when the app enters the background, though iOS
//only grants roughly 10 minutes of background execution time.
- (void)applicationDidEnterBackground:(UIApplication *)application
{
    __block UIBackgroundTaskIdentifier task = 0;
    task = [application beginBackgroundTaskWithExpirationHandler:^{
        NSLog(@"Expiration handler called %f", [application backgroundTimeRemaining]);
        [application endBackgroundTask:task];
        task = UIBackgroundTaskInvalid;
    }];
}

This is the first time I've used many of these features, so I'm not sure this is exactly right, but I think you get the idea. It's untested, as I don't have access to the right tools at the moment. Compiled using these sources:

  • Recording voice in background using AVAudioRecorder

  • http://prassan-warrior.blogspot.com/2012/11/recording-audio-on-iphone-with.html

  • Can we fire an event when ever there is Incoming and Outgoing call in iphone?



Answer 4:

I guess some hardware could solve this: a small recorder connected to the minijack port, with earbuds and a microphone passing through it. The recorder itself can be very simple, and while you are not in a conversation it could feed the recorded data back to the phone through the jack cable. A simple start button (just like the volume controls on earbuds) would be sufficient for timing the recording.

Some setups

  • http://www.danmccomb.com/posts/483/how-to-record-iphone-conversations-using-zoom-h4n/
  • http://forums.macrumors.com/showthread.php?t=346430


Answer 5:

Apple does not allow it and does not provide any API for it.

However, on a jailbroken device I'm sure it's possible. As a matter of fact, I think it's already been done. I remember seeing an app, back when my phone was jailbroken, that changed your voice and recorded the call. I remember it was a US company offering it only in the States; unfortunately I don't remember the name...