As far as I can tell, Mac is the only platform on which HockeyApp doesn't support attachments for Feedback right now. Does anyone have insight into how I can append a simple string, or even an NSData blob, to a feedback message?
Looking through the documentation, I see two protocols that can be implemented:
1) BITHockeyManagerDelegate
2) BITCrashManagerDelegate
What's puzzling me is that neither of these has a reference to the FeedbackManager. The only thing that's somewhat relevant is the CrashManagerDelegate's mention of:
    - (BITHockeyAttachment *)attachmentForCrashManager:(BITCrashManager *)crashManager

Though, this seems to be called only when sending a crash report. Perhaps I'm wrong?
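For reference, here's the crash-report path as I understand it (an untested sketch on my part; the BITHockeyAttachment initializer is taken from the SDK headers):

    // Untested sketch: attaching data to a *crash report* via the
    // BITCrashManagerDelegate hook. This fires when a crash report is
    // sent, not when a feedback message is composed.
    - (BITHockeyAttachment *)attachmentForCrashManager:(BITCrashManager *)crashManager {
        NSData *log = [@"some diagnostic string" dataUsingEncoding:NSUTF8StringEncoding];
        return [[BITHockeyAttachment alloc] initWithFilename:@"diagnostics.txt"
                                        hockeyAttachmentData:log
                                                 contentType:@"text/plain"];
    }

What I can't find is an equivalent hook for feedback messages.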
The HockeyApp Git repo seems to have a header, BITFeedbackManagerPrivate.h, containing the functionality I'm looking for, but it's inaccessible from HockeyApp's pre-built .dylib.
After building from source, seemingly no other class exposes its methods, so I'm trying to figure out whether this feature has been implemented yet or if I'm missing something.
Any insight is greatly appreciated. Cheers,
Zack
The repository doesn't have a feature for that at the moment, not even in the private headers. The feature is available in the iOS SDK and will soon be brought over to the Mac SDK. So far the demand for things like this has been very low, so the priority wasn't that high.
Please file a ticket with support or on the GitHub repository for requests like this, so we can answer and react quickly and don't have to search Stack Overflow :)
Thanks!
I've got a big ask here, but I am hoping someone might be able to help me. If there's another site you think this should be posted on, please let me know.
I'm the developer of the free app Amphetamine for macOS, and I'm hoping to add a new feature to the app: keeping a Mac awake in closed-display (clamshell) mode without a keyboard, mouse, power adapter, or display connected to the Mac. I get requests to add this feature on an almost daily basis.
I've been working on a solution (and it's mostly ready) that uses a non-App Store helper app that must be downloaded and installed separately. I could still go with that solution, but I want to explore one more option before pushing the separate-app solution out to the world.
An Amphetamine user tipped me off that another app, AntiSleep, can keep a Mac awake in closed-display mode without meeting Apple's requirements. I've tested this claim, and it's true. After doing a bit of digging into how AntiSleep might be accomplishing this, I've come up with two possible theories so far (though there may be more to it):
1) In addition to the standard power assertion types, it looks like AntiSleep is using (a) private framework(s) to apply non-standard power assertions. The following non-standard power assertion types are active while AntiSleep is keeping a Mac awake: DenySystemSleep, UserIsActive, RequiresDisplayAudio, & InternalPreventDisplaySleep. I haven't been able to find much information on these power assertion types beyond what appears in IOPMLibPrivate.h. I'm not at all familiar with using private frameworks, but I assume I could theoretically add the IOPMLibPrivate header file to a project and then create these power assertion types (a minimal sketch follows these two theories). I understand that would likely result in an App Store review rejection for Amphetamine, of course. What about non-App Store apps? Would Apple notarize an app using this? Beyond that, could someone help me confirm that the only way to apply these non-standard power assertions is to use a private framework?
2) I suspect that AntiSleep may also be creating a virtual keyboard and mouse. Certainly, creating a virtual keyboard and mouse to get around Apple's requirement that a keyboard and mouse be connected to the Mac when using closed-display mode is an intriguing idea. After doing some searching, I found foohid. However, I ran into all kinds of errors trying to add and use the foohid files in a test project. Would someone be willing to take a look at the foohid project and help me understand whether it is theoretically possible to include this functionality in an App Store compatible app? I'm not asking for code help with that (yet). I'd just like some help determining whether it might be possible.
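For theory 1), this is the kind of call I have in mind (an untested sketch; since IOPMAssertionCreateWithName() takes the assertion type as a plain CFString, it looks like the private types could be passed by name without even importing IOPMLibPrivate.h):

    // Untested sketch: raising one of the private assertion types by name.
    // "DenySystemSleep" is one of the types I saw active under AntiSleep.
    #import <IOKit/pwr_mgt/IOPMLib.h>

    IOPMAssertionID assertionID = kIOPMNullAssertionID;
    IOReturn result = IOPMAssertionCreateWithName(
        CFSTR("DenySystemSleep"),                 // private assertion type
        kIOPMAssertionLevelOn,
        CFSTR("Amphetamine closed-display test"), // shows up in pmset -g assertions
        &assertionID);
    if (result == kIOReturnSuccess) {
        // ...keep the Mac awake, then release when done:
        IOPMAssertionRelease(assertionID);
    }

Whether the system actually honors these types when they come from a third-party process is exactly what I can't confirm.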
Thank you in advance for taking a look.
Would Apple notarize an app using this?
I haven't seen any issues with notarising code that uses private APIs. Currently, Apple seems to use notarisation only to scan for the inclusion of known malware.
Would someone be willing to take a look at the foohid project and help me understand whether it is theoretically possible to include this functionality in an App Store compatible app?
Taking a quick glance at the code of that project, it's clear it implements a kernel extension (kext). Those are not allowed on the App Store.
However, since macOS 10.15 Catalina, there's a new way to write HID drivers, using DriverKit. The idea is that the APIs are very similar to the kernel APIs, although I suspect it'll mean rewriting the kext as a DriverKit driver rather than simply porting it.
DriverKit drivers are permitted to be included in App Store apps.
I don't know if a DriverKit based HID driver will solve your specific power management issue.
If you go with a DriverKit solution, this will only work on 10.15+.
I suspect that AntiSleep may also be creating a virtual keyboard and mouse.
I haven't looked at AntiSleep, but I do know that in addition to writing an outright HID driver, it's possible to generate HID events using user space APIs such as IOHIDPostEvent(). I don't know if those are allowed on the App Store, but as far as I'm aware, IOKitLib is generally fine.
It's possible you might be able to implement your virtual input device using those.
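As a rough illustration of that user-space route, here's a sketch using the related Quartz Event Services calls (CGEventPost() rather than IOHIDPostEvent() itself; I don't know whether synthetic events of either kind satisfy the closed-display keyboard requirement):

    // Rough sketch: posting a synthetic key press from user space.
    // On recent macOS versions this requires the Accessibility permission.
    #import <CoreGraphics/CoreGraphics.h>

    CGEventRef spaceDown = CGEventCreateKeyboardEvent(NULL, (CGKeyCode)49, true);  // 49 = Space
    CGEventRef spaceUp   = CGEventCreateKeyboardEvent(NULL, (CGKeyCode)49, false);
    CGEventPost(kCGHIDEventTap, spaceDown);
    CGEventPost(kCGHIDEventTap, spaceUp);
    CFRelease(spaceDown);
    CFRelease(spaceUp);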
For a while now I've been trying, without success, to build a cross-platform solution that lets me use a custom camera with custom functionality. However, nobody on the internet seems to have gotten it working on every platform (often only Android & iOS are implemented, but not UWP), and I still don't understand why...
For the past few months I've been searching for a way to make something like a dependency service from which you can get the camera's stream/frames and then put them into a Xamarin.Forms.Image.
This design would let developers implement functions inside the dependency service, such as recording video or taking pictures from the native camera stream.
You could say, "But you can already use the Xam.Plugin.Media NuGet package from James Montemagno." Yes, but that package calls the native built-in camera, so you can't implement your own design or your own functionality.
So my question is: does anyone have any tips, or know of any project, that could help realize this idea? If I can make it work, I'll create a project on my public GitHub to help anyone else who wants to build it.
Thanks for any help.
PS: Here are some results from the research I've done: https://forums.xamarin.com/discussion/comment/284359/#Comment_284359
This article looks to be similar to what you are after:
Full Page Camera in Xamarin
It derives a camera page from ContentPage, then creates platform-specific custom renderers based on PageRenderer.
Bonus - there is source code on GitHub
I want to use Realm in a React Native application I'm developing. While going through the docs, I noticed that the core DB engine is not open source (the license is Apache 2.0, though). Does this mean I can't use it without contacting Realm? How does a combination of open-source and non-open-source licensing work? I'm a little confused here. Can someone please help me with this?
Thanks.
According to the FAQ, it's totally free:
Do I have to pay to use Realm?
No, Realm is entirely free to use, even in commercial projects.
https://realm.io/docs/swift/latest/#faq
Since I haven't gotten any response on the Unity3D or Evernote forums, I'll try it here.
Over the last year I have worked a lot with Unity3D, mostly because of its good integration with the Vuforia Augmented Reality library and the fact that publishing for multiple platforms is a piece of cake.
Now I want to show notes in an AR setting and am looking at the Evernote API for this. I couldn't find anything about using it with Unity, though I can see why this isn't the most common combination.
My question is: do you think I can access the Evernote API through Unity? If so, how should I do this? Or would it be wiser, for this purpose, to build (parts of) the application with Eclipse/Xcode?
Hope to hear from you!
Link to Evernote API: http://dev.evernote.com/doc/
The Evernote API has a C# SDK which you should be able to call from Unity. In terms of how to do it, you will probably need to download the SDK and follow the instructions yourself. Their GitHub repo seems like a good starting point.
One thing to note is that Unity's .NET support on mobile clients is quite limited, and with the web player you will need to deal with sandbox security issues. But start with a standalone build first and see how you go.
In the video, several devices all updated simultaneously with each save of the file being edited. I assume this is done with a grunt server command.
This seems like the perfect setup for testing. I didn't see a TV, but that would be useful too, and easy enough.
Is there any documentation for this, or does anyone know the answer? I'll come back and post here if I find anything.
Thanks.
Paul