Symbian: Getting contacts from SIM on S60 3rd edition

Here is the code that counts the number of contacts stored on the SIM card. When I compile it, I get an error saying that the library needed to use RBasicGsmPhone must be included. I googled a lot and found that gsmbas.lib is needed, but there is no such lib file in the MMP file suggestions. What should I do? Someone please help.
TInt SimCntCount = 0;
/* This code is just to get the TSY name */
CCommsDatabase* db = CCommsDatabase::NewL(EDatabaseTypeUnspecified);
CleanupStack::PushL(db);
CCommsDbTableView* table = db->OpenTableLC(TPtrC(MODEM));
table->GotoFirstRecord();
table->ReadTextL(TPtrC(MODEM_TSY_NAME), iTsyName);
// Cleanup - CommsDB no longer needed
CleanupStack::PopAndDestroy(2); // table, db

// Connect to the ETel server
RTelServer aTelServer;
User::LeaveIfError(aTelServer.Connect());
CleanupClosePushL(aTelServer);
User::LeaveIfError(aTelServer.LoadPhoneModule(iTsyName));

TInt numberOfPhones;
User::LeaveIfError(aTelServer.EnumeratePhones(numberOfPhones));

SimCntCount = 0;
for (TInt i = numberOfPhones; i > 0; i--)
{
    // Get the phone name
    RTelServer::TPhoneInfo phoneInfo;
    User::LeaveIfError(aTelServer.GetPhoneInfo(i - 1, phoneInfo));
    // Open the phone by name
    RBasicGsmPhone phone;
    User::LeaveIfError(phone.Open(aTelServer, phoneInfo.iName));
    TInt phoneBookCount;
    phone.EnumeratePhoneBooks(phoneBookCount);
    RBasicGsmPhone::TPhoneBookInfo aPbInfo;
    for (TInt j = 0; j < phoneBookCount; j++)
    {
        phone.GetPhoneBookInfo(j, aPbInfo);
        SimCntCount += aPbInfo.iUsed;
    }
    phone.Close();
}
CleanupStack::PopAndDestroy(1); // aTelServer

Some libraries are actually not public and must be obtained from Nokia. You can try contacting Nokia's Symbian support about this topic, if they still have any interest in it.
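If you do get the library from Nokia, wiring it in is just an ordinary LIBRARY statement in the project's .mmp file. A rough sketch, assuming the import library really is named gsmbas.lib as you found (etel.lib and commdb.lib are needed anyway for RTelServer and CCommsDatabase):
// In the application's .mmp file
LIBRARY   etel.lib      // RTelServer
LIBRARY   commdb.lib    // CCommsDatabase / CCommsDbTableView
LIBRARY   gsmbas.lib    // RBasicGsmPhone (assumed name; not part of the public SDK)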


I'm testing the Agora.io SDK in Unity3D; I cannot find how to select the input microphone, and GetAudioRecordingDeviceCount() is not working

I'm trying to use the Agora SDK with a Windows build, but when I use this method:
IAudioRecordingDeviceManager recordingManager = mRtcEngine.GetAudioRecordingDeviceManager();
if (recordingManager == null) {
    Debug.LogError("recordingManager is null!");
    return;
}
int devices = recordingManager.GetAudioRecordingDeviceCount();
The result is -10000000
Then if I call for example:
int val = recordingManager.GetAudioRecordingDevice(index, ref name, ref deviceId);
I get val == -2, and name and deviceId are empty.
How can I enumerate the microphones and select one?
In order to solve that, you need to call CreateAAudioRecordingDeviceManager() first; then you can call GetAudioRecordingDeviceCount().
(If you are not using IAudioRecordingDeviceManager but AudioRecordingDeviceManager instead, it looks like you need to call SetEngine() after instantiating AudioRecordingDeviceManager.)
In my case I am using it for both audio and video, like the code below (and it works fine):
//audio
IAudioRecordingDeviceManager audioDevManager = mRtcEngine.GetAudioRecordingDeviceManager();
audioDevManager.CreateAAudioRecordingDeviceManager();
int audioDevCount = audioDevManager.GetAudioRecordingDeviceCount();
Debug.Log($"AUDIO DEVICE COUNT: {audioDevCount}");
//video
IVideoDeviceManager videoDevManager = mRtcEngine.GetVideoDeviceManager();
videoDevManager.CreateAVideoDeviceManager();
int deviceCount = videoDevManager.GetVideoDeviceCount();
Debug.Log($"VIDEO DEVICE COUNT: {deviceCount}");
P.S. mRtcEngine is a local variable that stores the value returned by IRtcEngine.GetEngine().

How to retrieve a vCard in XMPP: which delegate method and flow is used to get the vCard in a Cocoa OS X app?

I have implemented the XMPP framework in my Cocoa OS X application and am currently working on vCards. I have managed to set all the required fields of the logged-in user's vCard, and it is stored successfully, but I have no proper solution for how to get the logged-in user's stored vCard back. Please give me a solution for this problem; I have been stuck on it for the last 3 days.
Thanks in advance
I have used the code below to set the vCard fields:
dispatch_queue_t queue = dispatch_queue_create("queue", DISPATCH_QUEUE_SERIAL);
dispatch_async(queue, ^
{
    XMPPvCardCoreDataStorage *xmppvCardStorage = [XMPPvCardCoreDataStorage sharedInstance];
    XMPPvCardTempModule *xmppvCardTempModule = [[XMPPvCardTempModule alloc] initWithvCardStorage:xmppvCardStorage];
    [xmppvCardTempModule activate:[self xmppStream]];
    XMPPvCardTemp *myvCardTemp = [xmppvCardTempModule myvCardTemp];
    if (!myvCardTemp)
    {
        NSXMLElement *vCardXML = [NSXMLElement elementWithName:@"vCard" xmlns:@"vcard-temp"];
        XMPPvCardTemp *newvCardTemp = [XMPPvCardTemp vCardTempFromElement:vCardXML];
        [newvCardTemp setName:@"vCard"];
        [newvCardTemp setNickname:lbl.stringValue];
        [newvCardTemp setFormattedName:@"abc"];
        [newvCardTemp setDesc:lbl_abt.stringValue];
        [xmppvCardTempModule updateMyvCardTemp:newvCardTemp];
    }
    else
    {
        [myvCardTemp setName:@"vCard"];
        [myvCardTemp setNickname:lbl.stringValue];
        [myvCardTemp setFormattedName:@"abc"];
        [myvCardTemp setDesc:lbl_abt.stringValue];
        [xmppvCardTempModule updateMyvCardTemp:myvCardTemp];
    }
});
And I also tried the code below to retrieve the vCard:
/* XMPPvCardTempModule *xmppvCardTempModule;
XMPPvCardTemp *vCard = [xmppvCardTempModule vCardTempForJID:[XMPPJID jidWithString:@"xxxx"] shouldFetch:YES];
NSLog(@"V CARD: %@", vCard.nickname); */
XMPPvCardCoreDataStorage *xmppvCardStorage = [XMPPvCardCoreDataStorage sharedInstance];
XMPPvCardTempModule *m = [[XMPPvCardTempModule alloc] initWithvCardStorage:xmppvCardStorage];
[m fetchvCardTempForJID:[XMPPJID jidWithString:@"xxxx"] ignoreStorage:YES];
NSLog(@"%@", xmppvCardStorage.description);
Please suggest the proper way to solve this problem.

Offline rendering with The Amazing Audio Engine

This post is also posted on The Amazing Audio Engine forum.
Hi everyone, I am new to The Amazing Audio Engine and to iOS development, and I have been trying to figure out how to get the BPM of a track.
So far I have found two articles on offline rendering on the forum:
http://forum.theamazingaudioengine.com/discussion/comment/1743/#Comment_1743
http://forum.theamazingaudioengine.com/discussion/comment/649#Comment_649
As far as I know the AEAudioControllerRenderMainOutput function is only correctly implemented in this fork.
I am trying to do offline rendering to process a track and then use the algorithm described here (JavaScript) and implemented here.
So far I'm loading this fork, and I am using Swift (I am part of Make School Summer Academy at the moment, which teaches Swift).
When playing a track, this code works for me (no offline rendering!):
audioController = AEAudioController(audioDescription: AEAudioController.nonInterleavedFloatStereoAudioDescription())
let file = NSBundle.mainBundle().URLForResource("track", withExtension: "m4a")
let channel: AnyObject! = AEAudioFilePlayer.audioFilePlayerWithURL(file, audioController: audioController, error: nil)
let receiver = AEBlockAudioReceiver { (source, time, frames, audioBufferList) -> Void in
    let leftSamples = UnsafeMutablePointer<Float>(audioBufferList[0].mBuffers.mData)
    // Advance the buffer sizeof(float) * 512
    let rightSamples = UnsafeMutablePointer<Float>(audioBufferList[0].mBuffers.mData) + 512
    println("leftSamples: \(leftSamples) rightSamples: \(rightSamples)")
}
audioController.addChannels([channel])
audioController.addOutputReceiver(receiver)
audioController.start()
Trying offline rendering
This is the code I am trying to run while using this fork:
audioController = AEAudioController(audioDescription: AEAudioController.nonInterleaved16BitStereoAudioDescription())
let file = NSBundle.mainBundle().URLForResource("track", withExtension: "mp3")
let channel: AnyObject! = AEAudioFilePlayer.audioFilePlayerWithURL(file, audioController: audioController, error: nil)
audioController.addChannels([channel])
audioController.start(nil)
audioController.stop()
var t = AudioTimeStamp()
let bufferLength: UInt32 = 4096
var buffer = AEAllocateAndInitAudioBufferList(audioController.audioDescription, Int32(bufferLength))
AEAudioControllerRenderMainOutput(audioController, t, bufferLength, buffer)
var renderDuration: NSTimeInterval = channel.duration
var sampleRate: Float64 = audioController.audioDescription.mSampleRate
var lengthInFrames: UInt32 = UInt32(renderDuration * sampleRate)
var songBuffer: [Float64]
t.mFlags = UInt32(kAudioTimeStampSampleTimeValid)
var frequencyAnalyzer = FrequencyAnalyzer()
println("renderDuration \(renderDuration)")
var outIsOpen = Boolean()
AUGraphClose(audioController.audioGraph)
AUGraphIsOpen(audioController.audioGraph, &outIsOpen)
println("AUGraphIsOpen: \(outIsOpen)")
for (var i: UInt32 = 0; i < lengthInFrames; i += bufferLength) {
    AEAudioControllerRenderMainOutput(audioController, t, bufferLength, buffer)
    t.mSampleTime += Float64(bufferLength)
    println(t.mSampleTime)
    let leftSamples = UnsafeMutablePointer<Int16>(buffer[0].mBuffers.mData)
    let rightSamples = UnsafeMutablePointer<Int16>(buffer[0].mBuffers.mData) + 512
    println("leftSamples: \(leftSamples.memory) rightSamples: \(rightSamples.memory)")
}
AEFreeAudioBufferList(buffer)
AUGraphOpen(audioController.audioGraph)
audioController.start(nil)
audioController.stop()
Offline rendering is not working for me at the moment. The second example is not working; it gives me a lot of mixed errors which I don't understand.
A very common one is inside the channelAudioProducer function on this line:
// Tell mixer/mixer's converter unit to render into audio
status = AudioUnitRender(group->converterUnit ? group->converterUnit : group->mixerAudioUnit, arg->ioActionFlags, &arg->originalTimeStamp, 0, *frames, audio);
It gives me EXC_BAD_ACCESS (code=EXC_I386_GPFLT). Among other errors this one is very common.
I am sorry, I am a total noob in this field, and there is some stuff I don't really understand. Should I use nonInterleaved16BitStereoAudioDescription or nonInterleavedFloatStereoAudioDescription? How does this affect the mData?
I would love to get some help on this since I'm kind of lost at the moment. Please, when you answer, try to explain it as fully as you can; I am new to this stuff.
NOTE: Posting code in Objective-C is fine if you don't know Swift.

FFmpeg: out-of-sync audio/video in an iOS application

The app saves the camera output into a .mov file and then turns it into FLV format, which is sent as AVPackets to an RTMP server.
It switches between two files all the time: one is being written with the camera output while the other one is being sent.
My problem is that the audio/video gets out of sync after a while.
The first buffer sent is always 100% in sync, but after a while it gets messed up.
I believe it's a DTS/PTS problem.
if (isVideo)
{
    packet->stream_index = VIDEO_STREAM;
    packet->dts = packet->pts = videoPosition;
    videoPosition += packet->duration = FLV_TIMEBASE * packet->duration * videoCodec->ticks_per_frame * videoCodec->time_base.num / videoCodec->time_base.den;
}
else
{
    packet->stream_index = AUDIO_STREAM;
    packet->dts = packet->pts = audioPosition;
    audioPosition += packet->duration = FLV_TIMEBASE * packet->duration / audioRate;
    //NSLog(@"audio position = %lld", audioPosition);
}
packet->pos = -1;
packet->convergence_duration = AV_NOPTS_VALUE;

// This sometimes fails without being a critical error, so no exception is raised
if ((code = av_interleaved_write_frame(file, packet)))
{
    NSLog(@"Streamer::Couldn't write frame");
}
av_free_packet(packet);
You can look at this sample: http://unick-soft.ru/art/files/ffmpegEncoder-vs2008.zip
Note that this sample is for Windows.
In this sample I use pts only for audio stream:
if (pVideoCodec->coded_frame->pts != AV_NOPTS_VALUE)
{
    pkt.pts = av_rescale_q(pVideoCodec->coded_frame->pts,
                           pVideoCodec->time_base, pVideoStream->time_base);
}
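For the audio stream, the corresponding rescale would look roughly like this; this is only a sketch, and pAudioCodec / pAudioStream are assumed names for the audio encoder context and output stream (they are not taken from the sample):
// Sketch: same idea for the audio packets - rescale the encoder pts from the
// codec time base into the stream time base (pAudioCodec/pAudioStream are assumed names)
if (pAudioCodec->coded_frame->pts != AV_NOPTS_VALUE)
{
    pkt.pts = av_rescale_q(pAudioCodec->coded_frame->pts,
                           pAudioCodec->time_base, pAudioStream->time_base);
}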
I was experiencing a similar issue when switching out the AVAssetWriters, and noticed that it went away if I only started using the new AVAssetWriter once I got a video sample:
https://medium.com/@brandon.kobel/ios-seamless-video-chunks-4383a5a3a874
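To illustrate the idea from that post, here is a rough Objective-C sketch; videoDataOutput, pendingWriterSwitch and startNextAssetWriterAtTime: are assumed names for your own capture output, switch flag and writer-switch helper, not existing API:
// Sketch: defer the switch to the next AVAssetWriter until a *video* sample arrives,
// so the new chunk never starts with audio that leads the first video frame.
- (void)captureOutput:(AVCaptureOutput *)output
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    BOOL isVideoSample = (output == self.videoDataOutput); // assumed property
    if (self.pendingWriterSwitch && isVideoSample)
    {
        [self startNextAssetWriterAtTime:CMSampleBufferGetPresentationTimeStamp(sampleBuffer)];
        self.pendingWriterSwitch = NO;
    }
    // ... append sampleBuffer to the currently active writer input ...
}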

VFW_E_NOT_IN_GRAPH in DirectShow video capture

Me again, trying to use DirectShow.
I tried to implement an example from a camera distributor to read the camera (I would like to get frames in the form of a byte array) and I am getting a VFW_E_NOT_IN_GRAPH error when trying to connect the pins.
I already searched and now know that this means I didn't add a specific filter to the graph builder (or that the filter I used isn't compatible?), but I did add a filter and can't see any differences from the sample... However, the sample isn't a project, only code scraps, so I think I may have forgotten some initialization.
Could you please take a look at this and tell me whether you find an error?
Everything runs without error; just the last hr is filled with 0x8004025F and nothing happens (I set a breakpoint inside the DoRenderSample method):
HRESULT hr = S_OK;
IBaseFilter* pFilter = 0;
hr = CreateKernelFilter(
    CLSID_VideoInputDeviceCategory,
    L"Videology USB-C Camera",
    &pFilter
    );
CoInitialize(NULL);
// CComQIPtr<IVideology20K1XX> pKs(pFilter);
CComQIPtr<IVideologyCommon> pKs( pFilter );
if (pFilter == 0) return;
// hr = pKs->SetWhiteBalanceMode(wbAuto);
CBitmapRenderer *m_pSnapshotter = new CBitmapRenderer( _T("Bitmap renderer"), NULL, &hr );
if (FAILED(hr))
{
    ASSERT("Couldn't create bitmap renderer.");
    return;
}
m_pSnapshotter->SetCallback( (IBitmapCallback*) this );
CComQIPtr< IBaseFilter > pGrabBase( m_pSnapshotter );
IGraphBuilder* m_pGraphBuilder = 0;
hr = CoCreateInstance(CLSID_FilterGraph, NULL,
                      CLSCTX_INPROC_SERVER, IID_IGraphBuilder, (void **)&m_pGraphBuilder);
hr = m_pGraphBuilder->AddFilter( pGrabBase, L"Snapshot" );
CComPtr<IPin> pOutPin;
hr = pFilter->FindPin( L"1", &pOutPin );
CBasePin* pInPin = m_pSnapshotter->GetPin( 0 );
hr = m_pGraphBuilder->Connect( pOutPin, pInPin );
I hope I didn't forget any important information...
(Using Embarcadero C++Builder XE2 (version 16) and DirectShow 9 from 2005, I think.)
The error code tells you what is wrong: VFW_E_NOT_IN_GRAPH, something is not in the graph. You connect two pins, which belong to two filters. One of those filters is not in the graph. Since you add pGrabBase a few lines above, it is the other filter that is not in the graph. Add it to the graph as well before connecting the pins.
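A minimal sketch of the missing step, reusing the variables from your snippet (the filter name string is just an example):
// Add the capture filter to the same graph before connecting its output pin
hr = m_pGraphBuilder->AddFilter( pFilter, L"Capture" );
if (SUCCEEDED(hr))
{
    hr = m_pGraphBuilder->Connect( pOutPin, pInPin );
}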