I'm developing a skill to play my own music.
I'm having trouble with the AudioPlayer interface in the developer console test environment ( https://developer.amazon.com/alexa/console/ask/test/ ). Two questions:
1. AudioPlayer built-in intents don't run as expected (test simulator bug?)
Me: Alexa, play mymusicSkill
Alexa: playing Wanderer track 1: Antelao Adrift
Me: Alexa, next
Alexa: What would you like to play next?
Me: Ask mymusicSkill next
Alexa: intent: next
BTW, my code:
'AMAZON.NextIntent': function () {
    this.response.speak('intent: next')
    this.emit(':responseReady')
},
So, following this documentation: https://developer.amazon.com/docs/custom-skills/audioplayer-interface-reference.html
the expected behaviour is that "Alexa, next" fires my skill's AMAZON.NextIntent, but it doesn't. That's a bug, in my opinion.
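For reference, once the intent does fire, I'd expect a working handler to hand back a play directive rather than plain speech. A minimal sketch with alexa-sdk v1 (the URL and token below are placeholders, not my real skill code):

'AMAZON.NextIntent': function () {
    // Placeholder values: a real skill would look these up from playlist state
    const nextUrl = 'https://example.com/audio/track2.mp3'
    const nextToken = 'track2'
    this.response.speak('Playing the next track.')
    // REPLACE_ALL drops the current stream and starts the new one immediately
    this.response.audioPlayerPlay('REPLACE_ALL', nextUrl, nextToken, null, 0)
    this.emit(':responseReady')
},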
2. AudioPlayer not (yet) supported on the test simulator?
I know that audio streaming is not supported in the console test simulator (see also my previous question: https://forums.developer.amazon.com/questions/143902/when-test-simulator-will-support-audioplayer-direc.html ). Because I can't get a physical Echo (I'm from Italy, and Amazon won't ship an Echo to an Italian address), is there any way to test my skill using audio streams?
NOTE/SUGGESTION to improve the web test simulator:
even if streaming audio to the web test simulator is problematic, a workaround could be to display an info message in the GUI showing that streaming is active (playback has started, etc.).
Any ideas?
Thanks
giorgio
I'm trying to troubleshoot an issue on watchOS.
I'm not sure how to reproduce the problem I'm seeing, but I do encounter it occasionally during testing on a real device in the wild, so I'm trying to use os_log in order to diagnose the problem after the fact.
As a first step, to make sure I understand how to write to the log and access it later, I've attempted to log an event any time the app first loads.
In the ExtensionDelegate.swift file for my app, I added this:
import os.log

extension OSLog {
    private static var subsystem = Bundle.main.bundleIdentifier!
    static let health = OSLog(subsystem: subsystem, category: "health")
}
Then, I updated the applicationDidBecomeActive delegate function with this:
func applicationDidBecomeActive() {
    os_log("App Started", log: OSLog.health, type: .error)
}
I know it's not really an error message, but from what I've read, messages below the .error level may not be persisted to the log for later retrieval. I want to make sure this message gets written to the log the way a real error would.
I installed the sysdiagnose profile, then installed the most recent version of my app.
After testing the app for the day, I attempted to export the file. Following the instructions I've found elsewhere, I produced a sysdiagnose on Apple Watch by holding the Digital Crown and Side button for two seconds (and felt the haptic feedback when I released).
Then, I put the watch on the charger per the instructions here, which recommended 15 minutes.
I opened the Watch app on my paired iPhone, then went to General > Diagnostic Logs and downloaded the sysdiagnose from Apple Watch, and sent it to my computer with AirDrop.
This gave me a tarball file (for example, sysdiagnose_2021.03.05_17-01-57-0700_Watch-OS_Watch_18S801.tar.gz). Once I decompressed that, I had a folder with many files and subfolders.
After poking around in this folder, I figured my best bet was to look in the system_logs.logarchive file. I opened that in the macOS Console app, set the Showing dropdown to All Messages, and looked around the time I opened the app. I didn't see any log output from my app.
I also filtered for "App Started" (the log message from my app) and didn't find anything.
Then, I filtered by category for "health" and didn't find the event I had logged.
Is system_logs.logarchive the correct place to be looking for the log output from my app?
If not, where should I be looking? Or what am I doing wrong?
I really want a better understanding of how I can log messages on Apple Watch and view them later, so I can make my Apple Watch apps more robust, but I'm at a dead end.
Am I looking in the wrong place? Or am I setting up the logging wrong? Or is it something else? I would appreciate any guidance about this!
According to the Apple Developer Forums, sysdiagnose allows you to view the logs from your Apple Watch.
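If Console's filters come up empty, you can also query the archive directly on the Mac with the log command line tool. A sketch, assuming your app's subsystem is com.example.myapp (substitute your real bundle identifier):

log show system_logs.logarchive --info --debug \
    --predicate 'subsystem == "com.example.myapp" AND category == "health"'

If nothing shows up there either, the messages were likely never persisted on the watch, rather than you looking in the wrong file.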
I need to implement an app that monitors inbound/outbound connections made by different apps on the iPhone. My app is going to run in the background using Apple's background multitasking feature for VoIP and navigation apps.
I can use private APIs, as my client doesn't need this app on the App Store.
Thanks.
I got through this.
I didn't need any private API to get information about active inbound/outbound connections.
You just need to know basic C programming and have some patience.
I wrote to the Apple dev forums about this, and got this response:
From my perspective most of what I have to say about this issue is covered in the post referenced below.
<https://devforums.apple.com/message/748272#748272>
The way to make progress on this is to:
o grab the relevant headers from the Mac OS X SDK
o look at the Darwin source for netstat to see how the pieces fit together
WARNING: There are serious compatibility risks associated with shipping an app that uses this technique; it's fine to use this for debugging and so on, but I recommend against shipping code like this to end users.
What I did, step by step:
1) Downloaded the netstat code from the BSD open source repository.
2) Added it to a new iPhone project.
3) Some header files are not present in the iOS SDK, so you need to copy them from opensource.apple.com and add them to your iPhone SDK at the relevant path under:
/Applications/Xcode.app/Contents/Developer/Platforms/iPhoneOS.platform/Developer/SDKs/iPhoneOS6.0.sdk/usr/include
My Xcode version is 4.5.2, so this is the path for my Xcode; yours may differ depending on your Xcode version. Remember to add those headers to both the iOS SDK and the iOS Simulator SDK, so the code works on the device as well as in the simulator.
4) You may find some minor errors in the netstat code about missing definitions of some structures in the header files, e.g. struct xunpcb64. Don't worry, the definitions are present; you need to comment out some "#if !TARGET_OS_EMBEDDED" / "#else" guards in those header files so that the iOS SDK build can reach those definitions. (This takes some trial and error; be patient.)
5) Finally, you will be able to compile your code. Cheers! (A minimal sketch of the core sysctl call follows below.)
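To show the heart of the technique rather than the full ported netstat: the kernel hands over the TCP protocol control block list through sysctl, and everything else is parsing that buffer with the structures from the copied headers. A minimal C sketch, assuming the net.inet.tcp.pcblist sysctl name used by the BSD netstat source:

#include <sys/types.h>
#include <sys/sysctl.h>
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    // Ask for the size of the TCP protocol control block list first.
    size_t len = 0;
    if (sysctlbyname("net.inet.tcp.pcblist", NULL, &len, NULL, 0) < 0) {
        perror("sysctlbyname (size)");
        return 1;
    }
    char *buf = malloc(len);
    if (buf == NULL) return 1;
    // Fetch the actual data; netstat walks the xinpgen records in this buffer.
    if (sysctlbyname("net.inet.tcp.pcblist", buf, &len, NULL, 0) < 0) {
        perror("sysctlbyname (data)");
        free(buf);
        return 1;
    }
    printf("fetched %zu bytes of TCP connection data\n", len);
    free(buf);
    return 0;
}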
In case you haven't seen this already, I have used it successfully on my iPhone.
https://developer.apple.com/library/ios/ipad/#qa/qa1176/_index.html
I realize it is not exactly what you want.
Generally, I agree with Suneet. All you need is a network sniffer.
You can try doing a partial port of Wireshark (it's open source and it works on macOS) to iOS. iOS and OS X share most of the kernel, so if it's not explicitly prohibited on iOS, it should work for you.
Wireshark is quite a big product, though, so you may be interested in looking for another open source network sniffer that works on OS X.
I am performing a review on an iOS application for which I do not have the source code. In order to gain more control over the environment, I am running the application on a jailbroken iPad.
I'd like to be able to monitor the API calls the application is making; ideally I'd like to find something like Rohitab's Windows-based API Monitor, but for iOS.
I have done some research and found a project by KennyTM called "Subjective-C" that seems like it may do what I need. I have actually been using a cycript script, along with the libsubjc.dylib available on the Google Code site.
However, I have been unable to figure out how to get it to start logging calls for an app correctly. Here's the link to the cycript script, written by the author of Subjective-C (libsubjc); I pasted the script below as well.
/*
libsubjc.cy ... Use libsubjc in cycript.
Copyright (C) 2009 KennyTM~ <kennytm@gmail.com>
[...GPL3...]
*/
dlopen("libsubjc.dylib", 10);
if (!dlfun) {
    function dlfun(fn, encoding, altname) {
        var f = new Functor(dlsym(RTLD_DEFAULT, fn), encoding);
        if (f) this[altname || fn] = f;
        return f;
    }
}
dlfun("SubjC_start", "v");
dlfun("SubjC_end", "v");
dlfun("SubjC_set_file", "v^{sFILE=}");
dlfun("SubjC_set_maximum_depth", "vI");
dlfun("SubjC_set_print_arguments", "vB");
dlfun("SubjC_set_print_return_value", "vB");
dlfun("SubjC_set_print_timestamp", "vB");
SubjC_Deny = 0, SubjC_Allow = 1;
dlfun("SubjC_clear_filters", "v");
dlfun("SubjC_filter_method", "vi#:");
dlfun("SubjC_filter_class", "vi#");
dlfun("SubjC_filter_selector", "vi:");
dlfun("SubjC_default_filter_type", "vi");
dlfun("SubjC_filter_class_prefixes", "viI^*");
dlfun("SubjC_filter_class_prefix", "vi*");
dlfun("fopen", "^{sFILE=}**");
dlfun("fclose", "i^{sFILE=}");
I have been able to load the libsubjc cycript script and call the SubjC_start function. However, how do I specify an input file handle for the line starting with dlfun("SubjC_set_file", "v^{sFILE=}")?
Has anyone successfully used the "libsubjc.cy" cycript script with the Subjective-C library (libsubjc.dylib) to monitor an app's API calls?
UPDATE
The following at least generates the output file, but I don't see any information populated in it (/tmp/test.txt).
cycript -p SpringBoard libsubjc.cy; cycript -p SpringBoard
f = fopen("/tmp/test.txt", "w");
SubjC_set_file(f);
SubjC_set_maximum_depth(15);
SubjC_set_print_arguments(YES);
SubjC_set_print_return_value(YES);
SubjC_set_print_timestamp(YES);
SubjC_default_filter_type(SubjC_Deny);
SubjC_start();
//do stuff
SubjC_end();
Or, if anyone knows of another way to monitor API calls (w/o source code) on a jailbroken device, please let me know!
I'm not aware of a direct equivalent to API Monitor. However, Frida is a popular tool for mobile app instrumentation, with a tutorial on iOS usage. Once installed, you can trace API calls with a command like frida-trace -U -i "CCCryptorCreate*" Twitter, which traces calls from the Twitter app to functions whose names start with CCCryptorCreate.
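frida-trace generates editable JavaScript handler stubs (under a __handlers__ directory) for each traced function. A sketch of what one might look like; note the exact stub format varies between Frida versions, and the argument comment assumes CCCryptorCreate's documented signature:

{
    onEnter: function (log, args, state) {
        // First argument of CCCryptorCreate is the CCOperation
        // (0 = encrypt, 1 = decrypt).
        log("CCCryptorCreate(op=" + args[0] + ")");
    },
    onLeave: function (log, retval, state) {
        log("CCCryptorCreate -> " + retval);
    }
}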
Set up a proxy server on your computer to redirect and track all the API calls. This is a common way to peek into iOS web traffic, and you don't need a jailbroken device.
Is there a way to check whether the user has enabled speech recognition (spoken commands) in System Preferences? (Mac OS X). If the user has it enabled, I would like to support additional speech commands. Unfortunately there isn't any method in NSSpeechRecognizer to check this and I can't seem to find any Carbon functions to check it either.
One of the problems is that the round Speech Commands window (the one with a microphone on it) seems to appear intermittently whenever I instantiate NSSpeechRecognizer. Also, it often freezes my app for about half a second while the object is created (probably because it's starting up the speech recognition service).
In essence, if the speech recognizer isn't already running and being used, I don't want to start it up. But if the user actively uses the speech recognizer, I would like to provide additional support for it.
Thanks in advance.
I don't know the public API either; but the round mic window is controlled by SpeakableItems.app, at least on OS X 10.6. You can check the process list and/or the running applications list to see if it's there.
Inspired by @Yuji's answer, it looks like the only way is to check whether the speech recognition server is running. Here is the code snippet, in case anyone else needs it.
+ (BOOL)speakableItemsEnabled {
    NSString *speechServerBundleName = @"com.apple.speech.recognitionserver";
    NSArray *apps = [NSRunningApplication runningApplicationsWithBundleIdentifier:speechServerBundleName];
    NSRunningApplication *speechServerApp = [apps lastObject];
    return speechServerApp && !speechServerApp.terminated;
}
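Usage is then a simple guard before creating the recognizer (a sketch; MyController stands for whichever class declares the method above):

if ([MyController speakableItemsEnabled]) {
    // The user actively runs speech recognition, so it is safe to
    // create the recognizer and register the additional commands.
    self.recognizer = [[NSSpeechRecognizer alloc] init];
}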
Hopefully this doesn't break in 10.7 "Lion".
Would anyone have an idea as to why an app would work on almost every phone that has 2.1 but not the Desire?
One of my apps uses voice input and the Desire is the only phone that force closes when the voice prompt comes up.
The worst part is that I don't know how to test this: I don't have one, and I don't know anyone who does.
Any ideas?
EDIT:
I finally found out that HTC disabled voice input on the Desire, and you have to use a workaround to install it.
So if you are relying on voice input, make sure you use the code from the Google example to catch the error:
PackageManager pm = getPackageManager();
List<ResolveInfo> activities = pm.queryIntentActivities(
        new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH), 0);
if (activities.size() == 0) {
    noResults.setText("Voice input not found on this phone.");
} else {
    // Voice input is available
}
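For completeness, the launch code inside the else branch might look like this (a sketch; REQUEST_SPEECH is an arbitrary request code you define yourself):

Intent intent = new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH);
// Free-form language model suits general dictation; see the RecognizerIntent docs.
intent.putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL,
        RecognizerIntent.LANGUAGE_MODEL_FREE_FORM);
startActivityForResult(intent, REQUEST_SPEECH);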
I think the most important thing to do first is to get the exception report. Since you can't test it yourself, I would use a tool to collect exception reports from your customers. On Android 2.2 the built-in error reporting can be used. If you target other SDK versions, I would recommend this service: http://code.google.com/p/android-remote-stacktrace/ to get a remote stacktrace.
Then if you post the stacktrace here, I think somebody will be able to help you!