I have an iPad kiosk that runs a single app. I would like to conserve power by having it sleep/lock or go to a low-power state (dim the screen), either at regularly scheduled times (outside of business hours) or when idle for a specified period of time. The low-power state on a schedule will be good enough if it is simpler, as the kiosks are usually in pretty consistent use during the day.
Any tutorials out there that show how to accomplish this in Objective-C, or perhaps a few snippets to get me started?
The kiosks physically prohibit use of any of the buttons. I looked into iOS 6 Guided Access, but I don't see how a user could unlock/wake the screen without access to the buttons. Another solution could be Kiosk Pro Plus, but $40 a pop just for sleeping doesn't seem worth it; I'd like to program it myself if feasible.
MokiTouch is a free alternative to KioskPro and has the sleep feature. You can pay to add remote management, but that's optional.
See MokiTouch.com for more info.
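If you'd still rather roll your own, the pieces are small. Here's a minimal sketch of the scheduled-dimming approach, in your app delegate (assuming iOS 5 or later, where UIScreen gained the brightness property; the 9-to-5 business hours are placeholder values):

- (void)applicationDidFinishLaunching:(UIApplication *)application
{
    // The buttons are inaccessible, so prevent auto-lock entirely
    // and manage the screen brightness ourselves.
    [UIApplication sharedApplication].idleTimerDisabled = YES;

    // Check the time of day once a minute.
    [NSTimer scheduledTimerWithTimeInterval:60.0
                                     target:self
                                   selector:@selector(updateBrightness:)
                                   userInfo:nil
                                    repeats:YES];
}

- (void)updateBrightness:(NSTimer *)timer
{
    NSDateComponents *now = [[NSCalendar currentCalendar] components:NSHourCalendarUnit
                                                            fromDate:[NSDate date]];
    BOOL openHours = (now.hour >= 9 && now.hour < 17); // placeholder hours
    // brightness requires iOS 5.0 or later.
    [UIScreen mainScreen].brightness = openHours ? 1.0f : 0.05f;
}

If you want the idle-timeout variant instead, you can reset a countdown timer from a UIApplication subclass's sendEvent: override, restoring full brightness on the first touch.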
I am creating an application for Mac, in Objective-C, which will run in the menu bar and perform periodic desktop operations (such as changing the wallpaper). I am creating the application so that it stays in the menu bar at all times, allowing easy access to configuration options and other information. My main concern is how to schedule my app to run every X minutes to do the desktop operations.
The most common solution I have seen is using NSTimer; however, I am concerned that it will not be memory efficient (after reading the following page in the Apple Developer docs). Using an NSTimer will prevent the laptop from going to sleep, and it will need an always-running thread to check for when the NSTimer has elapsed. Is there a more memory-efficient way of using NSTimer to schedule these operations?
Alternatively, is there a way to use launchd to initiate a call to my application (which is in the menu bar) so that it can handle the event and do the desktop operations? I think the second way is better, but I am not sure if it is possible.
First, excellent instincts on keeping this low-impact. But you're probably over-worried in this particular case.
When they say "waking the system from an idle state" they don't mean system-level "sleep" where the screen goes black. They mean idle state. The CPU can take little mini-naps for fractions of a second when there isn't work that immediately needs to be done. This can dramatically reduce power requirements, even while the system is technically "awake."
The problem with having lots of timers flying around isn't so much their frequencies as their tolerances. Say you have 10 timers with a 1-second frequency, but they're offset from each other by 100ms (just by chance of when they happened to start). That means the longest possible "gap" between wakeups is 100ms. But if they were configured at 1 second with a 0.9-second tolerance (i.e., firing anywhere between 1s and 1.9s), then the system could schedule them all together, do a bunch of work, and spend most of the second idle. That's much better for power.
To be a good timer citizen, you should first set your timer at the interval at which you really want to do work. If it is common for your timer to fire but all you do is check some condition and reschedule the timer, you're wasting power. (It sounds like you already have this in hand.) The second thing you should do is set a reasonable tolerance. The default is 0, which is a very small tolerance (it's not literally "zero tolerance," but it's very small compared to a minute). For your kind of problem, I'd probably use a tolerance of at least 1s.
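For example, a minimal sketch (assuming OS X 10.9 or later, where NSTimer gained the tolerance property; updateDesktop: is a stand-in for your wallpaper-changing method):

// Fire roughly every 5 minutes; the generous tolerance lets the
// system coalesce this timer with other wakeups to save power.
NSTimer *timer = [NSTimer scheduledTimerWithTimeInterval:300.0
                                                  target:self
                                                selector:@selector(updateDesktop:)
                                                userInfo:nil
                                                 repeats:YES];
timer.tolerance = 30.0; // may fire anywhere between 300s and 330s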
I highly recommend the Energy Best Practices talk from WWDC 2013. You may also be interested in the later Writing Energy Efficient Code sessions from 2014 and Achieving All-day Battery Life from 2015.
It is of course possible to do this with launchd, but it adds a lot of complexity, especially at installation. I don't recommend it for the problem you're describing.
I am evaluating some code that is stacking calls to beginBackgroundTaskWithExpirationHandler in an effort to leave a timer running in the background. I have to admit that it is a pretty clever idea, but I am not sure if this is best practice.
So the flow:
Call beginBackgroundTaskWithExpirationHandler with a callback handler
When it returns, do something, then call again
Rinse and repeat, checking for UIBackgroundTaskInvalid along the way
I know that 180 seconds is the max time, but that this can be shorter in some cases.
To the questions:
1: Is this legal?
2: Would you suspect that Apple would be OK with giving the app 3 minutes of background over and over, thus leaving the process in the background for, say, an hour?
3: Would you count on this?
Thanks in advance!
Would you suspect that Apple would be OK with giving the app 3 minutes of background over and over, thus leaving the process in the background for, say, an hour?
No, Apple would not be OK with it, even if you could do it. They specified a 3-minute limit for a reason: to ensure that we don't have apps running in the background without the user's knowledge, consuming CPU cycles and memory and draining the battery. (In fact, the "finite task" limit used to be 10 minutes, but several years ago Apple further restricted it to just 3 minutes.) Imagine a world where all app developers routinely circumvented this 3-minute limit: our devices would have their batteries drained quickly, and foreground apps would be less responsive and have less memory with which to operate.
Having said that, Apple has identified a very narrow set of operations that are acceptable to keep running in the background (VOIP, navigation apps, music apps, Bluetooth operation, etc.), where the user has reasonable expectations about the battery and performance impact.
There are also classes of tasks that employ some limited background capabilities (e.g. requesting time to complete some finite-length task, opportunistic periodic background fetch, significant-change location services, giving the user a chance to respond to push or local notifications, etc.). The intent is to offer meaningful background capabilities while minimizing battery impact.
Bottom line, Apple otherwise discourages/curtails indiscriminate background operation. In the App Store Guidelines, they explicitly say:
2.16 Multitasking Apps may only use background services for their intended purposes: VoIP, audio playback, location, task completion, local notifications, etc.
...
4.5 Apps using background location services must provide a reason that clarifies the purpose of the use, using mechanisms described in the Human Interface Guidelines
Having said all of this, if you describe precisely what you need this background operation for, we might be able to identify which of the multitude of background capabilities Apple offers you could use to achieve the desired effect. All of these interfaces are designed to solve specific problems while balancing functionality with the scarce resources on our devices. But if it's something like "hey, I want to ping my server every five minutes," then no, Apple will frown upon that.
For more information, this is discussed in some detail in the Background Execution chapter of the App Programming Guide for iOS.
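For contrast, the supported pattern is a single finite-length task that is ended as soon as the work completes, with no attempt to re-request time. A minimal sketch (performCleanup is a hypothetical stand-in for your actual work):

UIApplication *app = [UIApplication sharedApplication];
__block UIBackgroundTaskIdentifier taskID;
taskID = [app beginBackgroundTaskWithExpirationHandler:^{
    // The system's patience has run out; end the task immediately
    // or the app will be killed.
    [app endBackgroundTask:taskID];
    taskID = UIBackgroundTaskInvalid;
}];

dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
    [self performCleanup]; // hypothetical finite-length work
    // End the task as soon as the work is done; don't hold it open.
    [app endBackgroundTask:taskID];
    taskID = UIBackgroundTaskInvalid;
});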
I'm working on an arcade cabinet that will be able to play various video game consoles (real hardware, not emulated). There will be a PC inside to run a selection menu; I'll have to write that myself. I'll also need to program a PLC, which will do various things like control the relays that switch audio/video/controls between the PC and the various consoles, etc. I'll need help with those two tasks in time, but they are not what I'm working on right now.
What I'm working on as a starting point has to do with the controller encoding. Basically, the controls for each player consist of a few buttons and a joystick. These use momentary, normally-open contact switches, one for each button, and one for each cardinal direction on the joystick. Pressing the button, or joystick direction, closes the switch. The state of the buttons is then communicated to the console by an encoder.
The encoder has a connection for each button and joystick direction which is connected to 5 volts ("high") through a pull-up resistor. When a button or direction is pressed, a connection to ground is made through the momentary switch. When the encoder reads ground ("low") on a button connection, it knows that a button has been pressed and it communicates this to the console.
I already have all this working with the various consoles, but I've thought of some features that would be nice to add. This is where my current task comes in.
The first feature is button remapping. Some of these games were designed with controllers in mind, so when you use them with an arcade control panel, some of the buttons may not be where you want them. Some games allow buttons to be remapped via software, but others do not. My idea is to add a PLC in between the joystick and buttons and the encoder. I'll call this PLC a "pre-encoder."
The pre-encoder would read the states of the buttons on some input pins, then write these states back to some output pins, relaying them to the encoder. The advantage is that its programming could associate any input pin with any output pin, effectively remapping the buttons. Whenever a console is selected via the computer's menu, a button-mapping profile associated with a particular game could be selected as well, and forwarded to the pre-encoder.
Of course, the pre-encoder's routine, which reads the buttons and relays their states to the encoder, must repeat very quickly for smooth control. These games will be running at about 50 to 60Hz, meaning a new video frame every 16.67 to 20ms. Ideally, the pre-encoder will be able to repeat this routine many, many times per frame to ensure the absolute minimum input lag. I want to ensure that the code and hardware selection are optimized to run as fast as possible.
The second feature is turbo buttons. Some games, especially arcade games, require a fire button to be pressed repeatedly every time you want to fire your gun, your ship's cannons, etc., even if you have unlimited ammo. This seems unnecessary, and it will tire your fingers out pretty quickly. A turbo button is one that can be held down continuously while the game is told that you are rapidly pressing and releasing it. This could be done in software for anything running on the PC, or with an analog solution like a 555 timer, but the best method is to synchronize the turbo button timing with the video refresh rate. By feeding the vertical sync pulse from the PC's or video game console's video output to a PLC, it will know exactly how often a frame of video is rendered. Turbo button timing can then be controlled by defining, in numbers of frames, the periods when the button should be pressed and released. Timing information could also be included with the game-specific button profiles.
The third feature is slow buttons. Actually, this would probably only be applied to the joystick, but I'm referring to the switches for its cardinal directions as buttons. In certain games (it will probably only be used in shmups) it is sometimes needed to move your character (ship/plane) through very tight spaces. If movement is too fast in response to even minimal joystick input, you may go too far and crash. The idea is that, while a slow activation button is held, the joystick will be made less responsive by rapidly activating and deactivating it in the same manner as the turbo buttons.
I'm not sure if I want the pre-encoder itself to be watching the vertical sync pulse, or if that will slow it down too much. My current thinking is that a separate PLC will be responsible for general management of the cab itself: watching the "on" button, switching relays, communicating directly with the PC, watching the vertical sync pulse, etc. This will free up the pre-encoder to run more quickly.
Here is some example "code" for the pre-encoder. Obviously, it's just a rough outline of what I have in mind, as I don't even know what language it will be. This example assumes that a dedicated PLC will be used just as the pre-encoder. A separate PLC will be responsible for watching the vertical sync pulse, in addition to other tasks, like getting a game profile from the computer and passing some of that info to the pre-encoder. That PLC will know what the frame timing should be for turbo and slow functions, it will count frames, and during frames when turbo buttons should be disabled, it outputs high to a pin on the pre-encoder PCB, letting it know to disable turbo buttons. During frames when it should be enabled, it outputs low to that pin. Same idea with the slow buttons. There is also a pin which the pre-encoder checks at the end of its routine, so it can be told to stop and await a different game profile.
get info from other PLC (which got it from the computer, from a user-selected game profile):
    array containing list of turbo buttons (buttons are identified by what input pin they are connected to)
    array containing list of slow buttons (will probably only be the joystick directions, if any)
    array containing list of slow activation buttons (should normally be only one button, if any)
    array containing list of normal buttons (not turbo or slow)
    array containing which output pin to use for each button (this determines remapping)

Begin Loop
    if turbo pin is high
        for each turbo button
            output pin = high
        next
    else
        for each turbo button
            output pin = input pin
        next
    end if
    if slow pin is high and slow activation button is pressed
        for each slow button
            output pin = high
        next
    else
        for each slow button
            output pin = input pin
        next
    end if
    for each normal button
        output pin = input pin
    next
Restart Loop unless stop pin is low
If you've read all this, thank you for your time. So (finally), here are my questions:
What are your overall thoughts; on my idea in general, feasibility, etc.?
What kind of PLC should I use for the pre-encoder? I was originally thinking of trying an Arduino, but my reading indicates that it will be much too slow, due to its use of high-level programming libraries. I don't have a problem building my own board around another PLC.
What language should I use to program the PLC? I don't mind learning a new language. There's no time limit on this project, and I'll put in whatever it takes to get the pre-encoder running as fast as possible.
What will I need to flash my program onto the PLC?
At run-time, how should these PLCs communicate with each other, and with the PC?
Am I asking in the right place; right forum, right section, etc.? Anywhere else I should ask?
Awaiting your response eagerly,
-Rob
I have some thoughts that might be useful to you:
What are your overall thoughts; on my idea in general, feasibility, etc.?
This project sounds like you want to cheat at Defender, like I used to do with a 555 timer chip in my Atari joystick when I was a kid.
The project is feasible but you will need a pretty fast PLC.
You might spend a lot of time making this work, like a quest.
What kind of PLC should I use for the pre-encoder? I was originally thinking of trying an Arduino, but my reading indicates that it will be much too slow, due to its use of high-level programming libraries. I don't have a problem building my own board around another PLC.
As I thought of what PLC might be fast enough, a few things came to mind.
If you use a PLC that has a task architecture, you can use an event to trigger a task on the v-sync pulse, and another event to trigger on console activity. If you use a PLC without a task architecture, the user might recognize the variable latency that will occur as the program scan moves in and out of phase with the v-sync and the activity in the game. This might not be true if the PLC is fast enough, say 1ms scan time.
Most inexpensive PLCs are never going to make it. The overhead and performance will keep most PLCs around 5-10ms per scan. However, a PC-based PLC might work well. So maybe a Beckhoff controller will work nicely. If you use something like a CX2000, it has Windows 7, USB, DVI for the user interface, and an Ethercat bus on the side to attach physical I/O cards for the controller and console connections. See about the software below. There are many non-PC-based PLCs that would work fine, but these will likely be expensive and harder to integrate.
The Arduino solution should work if you are using a fast enough model, but your development time will be higher because it doesn't come with anything but a blank screen and a bunch of libraries. Troubleshooting is much more of a pain in the neck than with PLCs, which really shine there. You'll need to plan carefully to get the Arduino to work. Also, hardware interfacing a microcontroller is harder, and you'll have to manage debouncing the switches in your code; every PLC has filtering on its inputs, and the variety of I/O makes design easy. But the Arduino or another microcontroller is really the choice if money is an issue. A fast PLC can be really expensive ($800 to $20k; think around $1500). If you are going to build more than a few systems, the Arduino might be better.
What language should I use to program the PLC? I don't mind learning a new language. There's no time limit on this project, and I'll put in whatever it takes to get the pre-encoder running as fast as possible.
IEC 61131-3 is the standard for PLC programming languages. In the USA, most PLCs are programmed in ladder logic because it is really easy to learn and quicker to troubleshoot and maintain in machinery. Structured text has its advantages too, particularly in performance. It looks like some amalgamation of Basic/C/Java, is easy to learn, and looks almost like your pseudocode example. As for your project, I think it could be programmed in either language. I would never use the other IEC 61131-3 languages for this task.
Beckhoff TwinCAT 3 uses MS Visual Studio as the IDE, where you can write both the selection menu (in VB/C++/C#) and the PLC code (in IEC 61131-3) in the same project. The runtime license for TwinCAT (on the CX2000 unit) runs in kernel mode, yielding processing time to Windows 7 whenever it is not doing something more important. I've used a few CX1020 models and they were great performers. The scan times were around 5ms with a significant amount of code; faster units will scan in under 1ms.
What will I need to flash my program onto the PLC?
PLCs don't "flash" like microcontrollers. Whatever software you use to write the program will have a way to connect to the controller. "Going online" makes the connection; "download" and "upload" refer to transferring the program between the development computer and the PLC; an "online edit" means making code changes while the PLC is executing the code. When modern PLCs are powered down, they use a battery to copy program and user RAM to flash, and when they power up, they copy the flash back to RAM. To make a connection to any modern PLC, you will use a USB or Ethernet cable.
At run-time, how should these PLCs communicate with each other, and with the PC?
You plan to use more than one PLC? A PLC-to-PC connection is a complicated subject. The term "OPC server" refers to some [expensive] software that lets your custom Windows PC application access memory in PLCs. The Beckhoff solution glues all that together nicely without buying more stuff. PLC-to-PLC communication is easier; the method is usually Ethernet, and the details vary widely.
Am I asking in the right place; right forum, right section, etc.? Anywhere else I should ask?
Sure, there is some PLC activity on this forum, which appears to tend toward hardcore PC/Web/Mobile development. I come here for awesomely intelligent answers to my deeper software questions.
You could try plctalk.net, a forum that is a little more geared toward nuts-and-bolts engineers and service techs with wild connectivity and compatibility questions related to machinery and automation. You might get some blank stares about vertical sync pulses. Their skill sets revolve around an industrial paradigm, where reliability is probably their highest calling.
You might also ask questions about performance on an Arduino or Microchip/Atmel/ARM forums. If you tell them that a PLC is faster than their hardware, that will rile them up real good! They might tell you that you can get microsecond performance numbers, which you can if you are using hardware interrupts and lots of physical circuitry to make that a reality, and you are able to cope with the sleepless nights of troubleshooting.
-Dennis
Leopard 10.5.8, Xcode 3.1.1; using runModalForWindow: to implement (what is intended to be) a high-performance mouse-tracking mechanism in which I have to do real-time, complex bitmap modifications.
The modal loop runs, the timer fires, the mouse tracks... but the performance is abysmal, and it gets worse and worse the longer the run loop goes on. Instead of catching mouse messages every pixel or so, I get them every 5... 10... 20 seconds.
Instruments shows that the majority of the time during this growing response bottleneck is being spent in mach_msg_trap (and yes, I have the perspective set to the running app), so the impression I am under is that it "thinks" it doesn't have any work to do, despite the fact that I'm dragging the mouse around with the button held down like a crazy person. There are no memory leaks showing up, and in my 8-core 2.8 GHZ machine, there's almost no CPU activity going on.
Again, the app is not spending much time in my code... so it's not a performance problem of mine. I've probably configured something wrong, or failed to configure it at all, or am simply approaching the whole idea wrong -- but I sure would appreciate some insight here. As it stands now, the dispatch of mouse messages and timer messages is absolutely unacceptable. You couldn't implement a crayon drawing program for someone immersed in cold molasses with the response times I'm getting.
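To clarify the setup, the timer is attached to the modal run loop along these lines (a sketch rather than my exact code; trackTick: stands in for the real tracking callback, and trackingPanel for the modal window):

NSTimer *timer = [NSTimer timerWithTimeInterval:0.02   // ~50 Hz
                                         target:self
                                       selector:@selector(trackTick:)
                                       userInfo:nil
                                        repeats:YES];
// Timers created with scheduledTimerWithTimeInterval: run only in the
// default mode, which is not serviced during runModalForWindow:, so
// the timer is added to the modal panel mode explicitly.
[[NSRunLoop currentRunLoop] addTimer:timer forMode:NSModalPanelRunLoopMode];
[NSApp runModalForWindow:trackingPanel];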
EDIT: Some additional info: this doesn't happen on my 10.5.8 MacBook Pro, just the 8-core, 6-display Mac Pro. I tried taking the display code for the crop rect out of drawRect: and replacing it with an NSLog()... it still drags on issuing mouse updates. I also tried rebooting and running without the usual complement of apps running, and with mirrored displays. No difference.
Imagine dragging a brush across the screen: at first, it paints smoothly, then gaps appear between brush placements, then they get larger, and this goes on until you're only getting one brush placement every 10 seconds. That's how this acts. Using NSLog() and various other tracking methods, I've determined that, at the highest level at least, it occurs because the mouseDragged events slow down to a trickle. The question in a nutshell is: why would that happen?
Anyone?
OK, isolated it: the problem comes from my Wacom tablet mouse. Plug in a regular optical mouse, and everything runs great. Same thing on my MacBook Pro using the trackpad: works fine.
The tablet is a Wacom Intuos 4 with the stock drivers as of January 2011. I'm going over to the Wacom site and reporting this next.
What a nightmare that was. I have spent over 100 hours on this, thinking I'd hosed some subtlety in the app handling, drawing, etc. Sheesh.
Are there any "best practices" for writing a power-efficient background application in Symbian?
Specifically, is there any way (i.e. an API) for a Symbian app to give the OS a hint about its current state in order to reduce battery consumption?
In Android, for instance, there is the notion of Wake Locks, which prevents the device from going into standby mode - Is there anything similar in Symbian?
EDIT:
Are there any implications to running code in a separate thread with the Open C library, rather than as "native" Symbian C++ using Active Objects etc.? (The Open C code is blocking on I/O most of the time.)
You can check user (in)activity with the RTimer::Inactivity() method. This approach is described on a Forum Nokia Wiki page, which also describes how to reset the inactivity timer.
You can check whether the device screen is turned on or off using the HAL API. See the HAL and HALData classes. You can use a call like this:
TInt displayState = 0;
// Returns KErrNone on success and fills in displayState.
TInt err = HAL::Get(HALData::EDisplayState, displayState);
On success, displayState will hold 0 if the display is turned off, or 1 otherwise.
With these APIs you will know whether the user is currently active, so you'll be able to change the behavior of your background service to reduce its power consumption.
You can also use the Nokia Energy Profiler application to record the handset's power consumption under the different power-saving options of your background service. Also, please refer to Nokia's document describing best practices for saving device power. The document is quite straightforward, but useful nonetheless.
Hope this helps.
EDIT: About the separate thread and Open C: as far as I know, Open C is just a plugin, and deep down all the implementations are still "native Symbian." So, as long as you avoid periodically polling some resource and just use ordinary blocking I/O, your code is about as economical on power as the standard Symbian Active Objects technique (which uses Symbian-specific semaphores to block threads).
I have not come across anything special in Symbian to keep the device out of standby mode. Basically, the "best practices" would be the same as on all mobile devices:
Don't loop waiting for things; always use whatever signaling services are available on the platform (for Symbian, Active Objects / User::WaitForXxx).
Limit the number of background threads (currently all mobile devices still have only one CPU...)
Don't hang onto system services; close them ASAP. (This is normally the main battery drain in my mobile applications. Figuring out which system service causes the most battery drain can be a real pain; WinMo is very bad for this.)
For me, it mostly comes down to a tradeoff between battery life and the application's performance and responsiveness. Unfortunately, the powers that be always seem to side with performance and responsiveness, and damn the battery drain...
Give your application low priority (see the RProcess and RThread classes). Your approach will really depend on what your background application does. These things consume the most battery: the radios (GSM/3G/WiFi/Bluetooth), the screen backlight, and file accesses.
Symbian OS will always try to put your application to sleep; you don't need to tell it to do so. Just make sure your approach gives it the opportunity.
Power management is a very important issue when developing an application.
In Symbian, it depends on what you are using to run background activities: whether you are using a thread or an active object.
For example, if you are developing a browser application and you want the browser to download something, that download activity should go into the background, show its progress as it runs, and come back to the foreground when it finishes.
If you are using threads, it depends on how you manage them: you can decide which threads to pause when the long-running activity starts, and resume them when the background activity has finished executing.
In fact, this is a very good topic you have raised.
There used to be an inactivity timer which could be reset by the application. This would prevent the screen from going into any screen saver mode.
If you use the various asynchronous functions in Symbian, your app will run when appropriate.
One of these methods should work depending on your needs. If you describe what you want to achieve in more detail it would be easier to help you.