Send USB packet on OS X - Objective-C

I need to send a packet to my connected USB device. I have a JavaScript function and even found a C# one, but I couldn't find any Objective-C or Swift analogue:
chrome.usb.controlTransfer(self.handle, {
    'direction': 'in',
    'recipient': 'device',
    'requestType': 'standard',
    'request': 6,
    'value': 0x300 | index,
    'index': 0,   // specifies language
    'length': 255 // max length to retrieve
}, function (result) {});
I've made the connection, so I have var device: IOHIDDevice, but I can't find the function to send the request.

Oh, sorry, I found it on Apple's site: https://developer.apple.com/reference/iokit/iousbdevrequest?language=objc
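For the record, the equivalent of that controlTransfer call on OS X goes through IOUSBDevRequest and the IOUSBDeviceInterface from IOUSBLib (note this uses the plain USB device interface, not the IOHIDDevice handle). A minimal sketch; getStringDescriptor and its parameters are illustrative names:

#include <IOKit/usb/IOUSBLib.h>
#include <IOKit/usb/USB.h>

// Mirrors the chrome.usb.controlTransfer call above.
// Assumes 'dev' is an already-opened IOUSBDeviceInterface**.
IOReturn getStringDescriptor(IOUSBDeviceInterface **dev, UInt8 index) {
    UInt8 buffer[256];
    IOUSBDevRequest request;
    request.bmRequestType = USBmakebmRequestType(kUSBIn, kUSBStandard, kUSBDevice); // direction/requestType/recipient
    request.bRequest      = kUSBRqGetDescriptor;           // 'request': 6
    request.wValue        = (kUSBStringDesc << 8) | index; // 'value': 0x300 | index
    request.wIndex        = 0;                             // 'index': specifies language
    request.wLength       = 255;                           // 'length': max length to retrieve
    request.pData         = buffer;
    return (*dev)->DeviceRequest(dev, &request);
}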

I have a problem with the YouTube API on an ESP8266

I'm trying to make a YouTube subscriber counter, but there's a problem with the YouTube API library. Here is the error message:
Arduino: 1.8.12 (Windows 10), Board: "NodeMCU 1.0 (ESP-12E Module), 80 MHz, Flash, Legacy (new can return nullptr), All SSL ciphers (most compatible), 4MB (FS:2MB OTA:~1019KB), 2, v2 Lower Memory, Disabled, None, Only Sketch, 115200"
The sketch name had to be modified.
Sketch names must start with a letter or number, followed by letters,
numbers, dashes, dots and underscores. Maximum length is 63 characters.
C:\Users\Um Sythat\Documents\Arduino\libraries\arduino-youtube-api-master\src\YoutubeApi.cpp:95:11: error: DynamicJsonBuffer is a class from ArduinoJson 5. Please see arduinojson.org/upgrade to learn how to upgrade your program to ArduinoJson version 6
DynamicJsonBuffer jsonBuffer;
^
C:\Users\Um Sythat\Documents\Arduino\libraries\arduino-youtube-api-master\src\YoutubeApi.cpp: In member function 'bool YoutubeApi::getChannelStatistics(String)':
C:\Users\Um Sythat\Documents\Arduino\libraries\arduino-youtube-api-master\src\YoutubeApi.cpp:95:20: error: 'jsonBuffer' was not declared in this scope
DynamicJsonBuffer jsonBuffer;
^
C:\Users\Um Sythat\Documents\Arduino\libraries\arduino-youtube-api-master\src\YoutubeApi.cpp:97:10: error: 'ArduinoJson::JsonObject' has no member named 'success'
if(root.success()) {
^
exit status 1
Error compiling for board NodeMCU 1.0 (ESP-12E Module).
Invalid library found in C:\Program Files (x86)\Arduino\libraries\libraries: no headers files (.h) found in C:\Program Files (x86)\Arduino\libraries\libraries
Invalid library found in C:\Program Files (x86)\Arduino\libraries\youtube_control_arduino: no headers files (.h) found in C:\Program Files (x86)\Arduino\libraries\youtube_control_arduino
This report would have more information with
"Show verbose output during compilation"
option enabled in File -> Preferences.
I already downloaded the YouTube API library and the ArduinoJson library and imported them into the Arduino IDE, but I always get this error and I don't know why. If someone knows, please help me - I'd like to hear from you.
And here is my code:
/*******************************************************************
 *  Read YouTube Channel statistics from the YouTube API           *
 *                                                                  *
 *  By Brian Lough                                                  *
 *  https://www.youtube.com/channel/UCezJOfu7OtqGzd5xrP3q6WA        *
 *******************************************************************/

#include <YoutubeApi.h>
#include <ESP8266WiFi.h>
#include <WiFiClientSecure.h>
#include <ArduinoJson.h> // This sketch doesn't technically need this, but the library does, so it must be installed.

//------- Replace the following! -------
char ssid[] = "xxx";      // your network SSID (name)
char password[] = "yyyy"; // your network key

#define API_KEY "zzzz" // your Google apps API token
#define CHANNEL_ID "UCezJOfu7OtqGzd5xrP3q6WA" // makes up the URL of the channel

WiFiClientSecure client;
YoutubeApi api(API_KEY, client);

unsigned long api_mtbs = 60000; // mean time between API requests
unsigned long api_lasttime;     // last time an API request was made
long subs = 0;

void setup() {
  Serial.begin(115200);

  // Set WiFi to station mode and disconnect from an AP if it was
  // previously connected
  WiFi.mode(WIFI_STA);
  WiFi.disconnect();
  delay(100);

  // Attempt to connect to the WiFi network:
  Serial.print("Connecting Wifi: ");
  Serial.println(ssid);
  WiFi.begin(ssid, password);
  while (WiFi.status() != WL_CONNECTED) {
    Serial.print(".");
    delay(500);
  }
  Serial.println("");
  Serial.println("WiFi connected");
  Serial.println("IP address: ");
  IPAddress ip = WiFi.localIP();
  Serial.println(ip);
}

void loop() {
  if (millis() - api_lasttime > api_mtbs) {
    if (api.getChannelStatistics(CHANNEL_ID)) {
      Serial.println("---------Stats---------");
      Serial.print("Subscriber Count: ");
      Serial.println(api.channelStats.subscriberCount);
      Serial.print("View Count: ");
      Serial.println(api.channelStats.viewCount);
      Serial.print("Comment Count: ");
      Serial.println(api.channelStats.commentCount);
      Serial.print("Video Count: ");
      Serial.println(api.channelStats.videoCount);
      // Probably not needed :)
      //Serial.print("hiddenSubscriberCount: ");
      //Serial.println(api.channelStats.hiddenSubscriberCount);
      Serial.println("------------------------");
    }
    api_lasttime = millis();
  }
}
I would ditch the library - it uses
#include <ArduinoJson.h>
and the error tells us what's wrong:
error: DynamicJsonBuffer is a class from ArduinoJson 5 <=== you probably have version 6.x.x installed
ArduinoJson is a memory hog, and often a library uses one single function from it while the rest is dead weight. To solve your problem, you have to
downgrade ArduinoJson to version 5.13.5
The other reason I do not like it: breaking changes in nearly all major releases (really big breaking changes). So another option you have (if you're skilled enough) is to replace the ArduinoJson calls with
a lighter JSON library
or with self-written JSON functions - often a simple buffer and some char handling does the trick. (A rough sketch of porting the failing lines to version 6 follows below.)
Read the issues and PRs on GitHub to inform yourself about missing updates and other problems. Development of the library stopped in March 2018; since then there has been no adaptation to the (breaking) changes in the YouTube API.
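If you would rather stay on ArduinoJson 6 than downgrade, the two failing spots in YoutubeApi.cpp need to be ported to the v6 API. A minimal sketch; the document capacity and the response field path are assumptions based on the YouTube channels.list statistics response:

// Hypothetical ArduinoJson 6 port of the failing v5 code in YoutubeApi.cpp:
//
//   DynamicJsonBuffer jsonBuffer;
//   JsonObject& root = jsonBuffer.parseObject(client);
//   if (root.success()) { ... }
//
DynamicJsonDocument doc(1024); // capacity is an assumption; size it to the response
DeserializationError jsonError = deserializeJson(doc, client);
if (!jsonError) {
    // The YouTube API returns the counts as JSON strings, so convert
    // explicitly rather than relying on implicit string-to-number casts.
    long subscriberCount = atol(doc["items"][0]["statistics"]["subscriberCount"] | "0");
    // ... read viewCount, commentCount, videoCount the same way ...
}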

Change OS X system volume programmatically

How can I change the volume programmatically from Objective-C?
I found this question, Controlling OS X volume in Snow Leopard, which suggests the following:
Float32 volume = 0.5;
UInt32 size = sizeof(Float32);
AudioObjectPropertyAddress address = {
    kAudioDevicePropertyVolumeScalar,
    kAudioDevicePropertyScopeOutput,
    1 // Use values 1 and 2 here, 0 (master) does not seem to work
};
OSStatus err;
err = AudioObjectSetPropertyData(kAudioObjectSystemObject, &address, 0, NULL, size, &volume);
NSLog(@"status is %i", err);
This does nothing for me, and prints out status is 2003332927.
I also tried using values 2 and 0 in the address structure, with the same result for both.
How can I fix this and make it actually decrease the volume to 50%?
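That status code is a Core Audio FourCC in disguise: 2003332927 is 0x77686F3F, which reads as 'who?' - that is, kAudioHardwareUnknownPropertyError. A small helper sketch for decoding these (logStatus is an illustrative name):

#include <stdio.h>
#include <CoreAudio/CoreAudio.h>

// Print an OSStatus as its four-character code, e.g. 'who?'.
static void logStatus(OSStatus err) {
    UInt32 code = (UInt32)err;
    printf("status is '%c%c%c%c' (%d)\n",
           (int)((code >> 24) & 0xFF), (int)((code >> 16) & 0xFF),
           (int)((code >> 8) & 0xFF), (int)(code & 0xFF), (int)err);
}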
You need to get the default audio device first:
#import <CoreAudio/CoreAudio.h>

AudioObjectPropertyAddress getDefaultOutputDevicePropertyAddress = {
    kAudioHardwarePropertyDefaultOutputDevice,
    kAudioObjectPropertyScopeGlobal,
    kAudioObjectPropertyElementMaster
};

AudioDeviceID defaultOutputDeviceID;
UInt32 volumedataSize = sizeof(defaultOutputDeviceID);
OSStatus result = AudioObjectGetPropertyData(kAudioObjectSystemObject,
                                             &getDefaultOutputDevicePropertyAddress,
                                             0, NULL,
                                             &volumedataSize, &defaultOutputDeviceID);
if (kAudioHardwareNoError != result)
{
    // ... handle error ...
}
You can then set your volume on channel 1 (left) and channel 2 (right). Note that channel 0 (master) does not seem to be supported (the set command returns 'who?').
AudioObjectPropertyAddress volumePropertyAddress = {
    kAudioDevicePropertyVolumeScalar,
    kAudioDevicePropertyScopeOutput,
    1 /* LEFT_CHANNEL */
};

Float32 volume = 0.5f; // the volume to set (0.0-1.0); must be initialized before the call
result = AudioObjectSetPropertyData(defaultOutputDeviceID,
                                    &volumePropertyAddress,
                                    0, NULL,
                                    sizeof(volume), &volume);
if (result != kAudioHardwareNoError) {
    // ... handle error ...
}
Hope this answers your question!
I ran the HALLab utility that comes with the developer tools (i.e. Audio Tools for Xcode). That allows you to open an info window for individual devices and that window has a tab showing notifications. When I change my system volume, I do indeed see that the kAudioDevicePropertyVolumeScalar property changes for each channel of the output device as Thomas O'Dell's answer suggests. However, I also see the property kAudioHardwareServiceDeviceProperty_VirtualMasterVolume change on the master channel. That seems much more promising since you don't have to manually set it for all channels and maintain the balance across them.
You would use the function AudioHardwareServiceSetPropertyData() from Audio Hardware Services to set that on the default output device. To be safe, you might first check that it's settable using AudioHardwareServiceIsPropertySettable().
The documentation for that property says:
kAudioHardwareServiceDeviceProperty_VirtualMasterVolume
A Float32 value that represents the value of the volume control.
The range for this property’s value is 0.0 (silence) through 1.0 (full level). The effect of this property depends on the hardware device associated with the HAL audio object. If the device has a master volume control, this property controls it. If the device has individual channel volume controls, this property applies to those identified by the device's preferred multichannel layout, or the preferred stereo pair if the device is stereo only. This control maintains relative balance between the channels it affects.
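A minimal sketch of that approach, assuming defaultOutputDeviceID has already been fetched as in the answer above:

#include <AudioToolbox/AudioToolbox.h> // Audio Hardware Services

AudioObjectPropertyAddress virtualMasterAddress = {
    kAudioHardwareServiceDeviceProperty_VirtualMasterVolume,
    kAudioDevicePropertyScopeOutput,
    kAudioObjectPropertyElementMaster
};
Boolean settable = false;
OSStatus status = AudioHardwareServiceIsPropertySettable(defaultOutputDeviceID,
                                                         &virtualMasterAddress,
                                                         &settable);
if (status == kAudioHardwareNoError && settable) {
    Float32 volume = 0.5f; // 0.0 (silence) through 1.0 (full level)
    status = AudioHardwareServiceSetPropertyData(defaultOutputDeviceID,
                                                 &virtualMasterAddress,
                                                 0, NULL,
                                                 sizeof(volume), &volume);
}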
Alternatively, you could run osascript (AppleScript) from your process to change the master volume. This avoids first setting the volume on one side only:
Muted:
execlp("osascript", "osascript", "-e", "set volume output muted true", NULL);
Change volume (scale 0-10):
execlp("osascript", "osascript", "-e", "set volume 5", NULL);

How do you set the input level (gain) on the built-in input (OSX Core Audio / Audio Unit)?

I've got an OSX app that records audio data using an Audio Unit. The Audio Unit's input can be set to any available source with inputs, including the built-in input. The problem is, the audio that I get from the built-in input is often clipped, whereas in a program such as Audacity (or even Quicktime) I can turn down the input level and I don't get clipping.
Multiplying the sample frames by a fraction, of course, doesn't work, because I get a lower volume, but the samples themselves are still clipped at time of input.
How do I set the input level or gain for that built-in input to avoid the clipping problem?
This works for me to set the input volume on my MacBook Pro (2011 model). It is a bit funky: I had to try setting the master channel volume, then each independent stereo channel volume, until I found one that worked. Look through the comments in my code. I suspect the best way to tell if your code is working is to find a get/set property combination that works, then do something like get/set (something else)/get to verify that your setter is working.
Oh, and I'll point out of course that I wouldn't rely on the values in address staying the same across getProperty calls as I'm doing here. It seems to work but it's definitely bad practice to rely on struct values being the same when you pass one by reference to a function. This is of course sample code so please forgive my laziness. ;)
//
//  main.c
//  testInputVolumeSetter
//

#include <CoreFoundation/CoreFoundation.h>
#include <CoreAudio/CoreAudio.h>

OSStatus setDefaultInputDeviceVolume( Float32 toVolume );

int main(int argc, const char * argv[]) {
    OSStatus err;

    // Load the Sound system preference, select a default
    // input device, set its volume to max. Now set
    // breakpoints at each of these lines. As you step over
    // them you'll see the input volume change in the Sound
    // preference panel.
    //
    // On my MacBook Pro setting the channel[ 1 ] volume
    // on the default microphone input device seems to do
    // the trick. channel[ 0 ] reports that it works but
    // seems to have no effect and the master channel is
    // unsettable.
    //
    // I do not know how to tell which one will work so
    // probably the best thing to do is write your code
    // to call getProperty after you call setProperty to
    // determine which channel(s) work.
    err = setDefaultInputDeviceVolume( 0.0 );
    err = setDefaultInputDeviceVolume( 0.5 );
    err = setDefaultInputDeviceVolume( 1.0 );
}

// 0.0 == no volume, 1.0 == max volume
OSStatus setDefaultInputDeviceVolume( Float32 toVolume ) {
    AudioObjectPropertyAddress address;
    AudioDeviceID deviceID;
    OSStatus err;
    UInt32 size;
    UInt32 channels[ 2 ];
    Float32 volume;

    // get the default input device id
    address.mSelector = kAudioHardwarePropertyDefaultInputDevice;
    address.mScope = kAudioObjectPropertyScopeGlobal;
    address.mElement = kAudioObjectPropertyElementMaster;
    size = sizeof(deviceID);
    err = AudioObjectGetPropertyData( kAudioObjectSystemObject, &address, 0, NULL, &size, &deviceID );

    // get the input device stereo channels
    if ( ! err ) {
        address.mSelector = kAudioDevicePropertyPreferredChannelsForStereo;
        address.mScope = kAudioDevicePropertyScopeInput;
        address.mElement = kAudioObjectPropertyElementWildcard;
        size = sizeof(channels);
        err = AudioObjectGetPropertyData( deviceID, &address, 0, NULL, &size, &channels );
    }

    // run some tests to see what channels might respond to volume changes
    if ( ! err ) {
        Boolean hasProperty;
        address.mSelector = kAudioDevicePropertyVolumeScalar;
        address.mScope = kAudioDevicePropertyScopeInput;

        // On my MacBook Pro using the default microphone input:
        address.mElement = kAudioObjectPropertyElementMaster;
        // returns false, no VolumeScalar property for the master channel
        hasProperty = AudioObjectHasProperty( deviceID, &address );

        address.mElement = channels[ 0 ];
        // returns true, channel 0 has a VolumeScalar property
        hasProperty = AudioObjectHasProperty( deviceID, &address );

        address.mElement = channels[ 1 ];
        // returns true, channel 1 has a VolumeScalar property
        hasProperty = AudioObjectHasProperty( deviceID, &address );
    }

    // try to get the input volume
    if ( ! err ) {
        address.mSelector = kAudioDevicePropertyVolumeScalar;
        address.mScope = kAudioDevicePropertyScopeInput;

        size = sizeof(volume);
        address.mElement = kAudioObjectPropertyElementMaster;
        // returns an error which we expect since it reported not having the property
        err = AudioObjectGetPropertyData( deviceID, &address, 0, NULL, &size, &volume );

        size = sizeof(volume);
        address.mElement = channels[ 0 ];
        // returns noErr, but says the volume is always zero (weird)
        err = AudioObjectGetPropertyData( deviceID, &address, 0, NULL, &size, &volume );

        size = sizeof(volume);
        address.mElement = channels[ 1 ];
        // returns noErr, and returns the correct volume!
        err = AudioObjectGetPropertyData( deviceID, &address, 0, NULL, &size, &volume );
    }

    // try to set the input volume
    if ( ! err ) {
        address.mSelector = kAudioDevicePropertyVolumeScalar;
        address.mScope = kAudioDevicePropertyScopeInput;
        size = sizeof(volume);
        if ( toVolume < 0.0 ) volume = 0.0;
        else if ( toVolume > 1.0 ) volume = 1.0;
        else volume = toVolume;

        address.mElement = kAudioObjectPropertyElementMaster;
        // returns an error which we expect since it reported not having the property
        err = AudioObjectSetPropertyData( deviceID, &address, 0, NULL, size, &volume );

        address.mElement = channels[ 0 ];
        // returns noErr, but doesn't affect my input gain
        err = AudioObjectSetPropertyData( deviceID, &address, 0, NULL, size, &volume );

        address.mElement = channels[ 1 ];
        // success! correctly sets the input device volume.
        err = AudioObjectSetPropertyData( deviceID, &address, 0, NULL, size, &volume );
    }
    return err;
}
EDIT in response to your question, "How'd [I] figure this out?"
I've spent a lot of time using Apple's audio code over the last five or so years and I've developed some intuition/process when it comes to where and how to look for solutions. My business partner and I co-wrote the original iHeartRadio apps for the first-generation iPhone and a few other devices and one of my responsibilities on that project was the audio portion, specifically writing an AAC Shoutcast stream decoder/player for iOS. There weren't any docs or open-source examples at the time so it involved a lot of trial-and-error and I learned a ton.
At any rate, when I read your question and saw the bounty I figured this was just low-hanging fruit (i.e. you hadn't RTFM ;-). I wrote a few lines of code to set the volume property and when that didn't work I genuinely got interested.
Process-wise maybe you'll find this useful:
Once I knew it wasn't a straightforward answer I started thinking about how to solve the problem. I knew the Sound System Preference lets you set the input gain so I started by disassembling it with otool to see whether Apple was making use of old or new Audio Toolbox routines (new as it happens):
Try using:
otool -tV /System/Library/PreferencePanes/Sound.prefPane/Contents/MacOS/Sound | bbedit
then search for Audio to see what methods are called (if you don't have bbedit, which every Mac developer should IMO, dump it to a file and open in some other text editor).
I'm most familiar with the older, deprecated Audio Toolbox routines (three years to obsolescence in this industry) so I looked at some Technotes from Apple. They have one that shows how to get the default input device and set its volume using the newest CoreAudio methods but as you undoubtedly saw their code doesn't work properly (at least on my MBP).
Once I got to that point I fell back on the tried-and-true: Start googling for keywords that were likely to be involved (e.g. AudioObjectSetPropertyData, kAudioDevicePropertyVolumeScalar, etc.) looking for example usage.
One interesting thing I've found about CoreAudio and using the Apple Toolbox in general is that there is a lot of open-source code out there where people try various things (tons of pastebins and GoogleCode projects etc.). If you're willing to dig through a bunch of this code you'll typically either find the answer outright or get some very good ideas.
In my search the most relevant things I found were the Apple technote showing how to get the default input device and set the master input gain using the new Toolbox routines (even though it didn't work on my hardware), and I found some code that showed setting the gain by channel on an output device. Since input devices can be multichannel I figured this was the next logical thing to try.
Your question is really good because, at least right now, there is no correct documentation from Apple that shows how to do what you asked. It's goofy too because both channels report that they set the volume, but clearly only one of them does (the input mic is a mono source so this isn't surprising, but I consider having a no-op channel and no documentation about it a bit of a bug on Apple's part).
This happens pretty consistently when you start dealing with Apple's cutting-edge technologies. You can do amazing things with their toolbox and it blows every other OS I've worked on out of the water but it doesn't take too long to get ahead of their documentation, particularly if you're trying to do anything moderately sophisticated.
If you ever decide to write a kernel driver for example you'll find the documentation on IOKit to be woefully inadequate. Ultimately you've got to get online and dig through source code, either other people's projects or the OS X source or both, and pretty soon you'll conclude as I have that the source is really the best place for answers (even though StackOverflow is pretty awesome).
Thanks for the points and good luck with your project :)

PhoneGap/Cordova iOS: capture video with a duration limit (e.g. 30 seconds)

I would like to limit video capturing to 30 seconds. As of now the PhoneGap documentation says the following of iOS implementation:
"The duration parameter is not supported. Recording lengths cannot be limited programmatically."
I did find this post which seems to give the solution for a purely objective C implementation:
iPhone: 5 seconds video capture
The question is: Is this something that could "easily" be made into a phonegap plugin or is there some other reason phonegap hasn't been able to implement this? If you think it can be done - any information pointing me in the right direction is much appreciated! Thanks :)
I'm trying to solve the same problem and may have a solution:
The capture.captureVideo() function returns an array of MediaFile objects. Those objects have a MediaFile.getFormatData() method that tells you the duration of the file, so you can reject the file if it's too long. (Note that this validates the recording after the fact rather than stopping capture at 30 seconds.)
Here's my solution:
navigator.device.capture.captureVideo(function (mediaFiles) {
    mediaFiles[0].getFormatData(function (data) {
        if (data.duration > 30) {
            /* Tell the user the video is too long */
        } else {
            /* Video is less than the max duration... all good */
        }
    });
}, function (error) { /* An error occurred */ },
null);

How to flush CocoaAsyncSocket?

Is there a way to flush the internal buffers/queues/etc. of CocoaAsyncSocket (GCDAsyncSocket)?
I want to set things up so that when I call the [... readDataWithTimeout ...] method, it doesn't read any leftover garbage data from a previous connection that might have been buffered and left sitting somewhere.
Or sometimes I just want to ignore a stream of data and start over, so I want to ensure a clean pipe.
I've also asked this question in the CocoaAsyncSocket Google Group and Robbie Hanson was kind enough to elaborate with his answer.
Make sure you send \n or \r after you write data.
I have written a socket manager class; you can check it here: CocoaAsyncSocketManager
Sample code:
func sendMessage(messageString msg: String) {
    // Terminate each message with a carriage return so the reader
    // can frame messages (e.g. with readData(to:withTimeout:tag:)).
    if let data = msg.data(using: .utf8), let carriageReturn = "\r".data(using: .utf8) {
        tcpSocket?.write(data, withTimeout: -1, tag: 0)
        tcpSocket?.write(carriageReturn, withTimeout: -1, tag: 99)
    }
}