I'm planning to present my sound work at a show, and I'm wondering whether it's possible to control the lights so that they dim and brighten slowly. The piece starts in pitch black, the lights gradually get brighter and darker as it plays, and everything returns to pitch black when the sound ends.
I have no experience with this.
If you're using the Hue API to change the state of a light or of a group of lights (links require you create a free Hue developer account to access), you can set a transitiontime property. This will cause the light to smoothly transition from its current state to the chosen state over that time period. This way you'd only need to send commands to the lights when you want them to start a new transition.
Note however that you will have trouble doing a transition from complete darkness: the lowest brightness for Hue bulbs is nowhere near pitch black, so you'd notice the jump from "off" to "brightness 1".
There is also a second Entertainment API that supports streaming light changes (i.e. up to about 10 times a second) rather than relying on transitions. This is somewhat more involved though.
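For example, here is a minimal sketch of the transition approach in Python using the requests library; the bridge IP, API username, light ID, and timing values are placeholders you would replace with your own:

```python
import time
import requests

# Placeholders (assumptions): your bridge's IP address, your API username, and a light ID.
BRIDGE_IP = "192.168.1.2"
USERNAME = "your-api-username"
STATE_URL = f"http://{BRIDGE_IP}/api/{USERNAME}/lights/1/state"

# Fade up from dim to full brightness over 30 seconds.
# transitiontime is in 100 ms steps, so 300 = 30 seconds; bri ranges from 1 to 254.
requests.put(STATE_URL, json={"on": True, "bri": 254, "transitiontime": 300})
time.sleep(40)

# At the end of the piece, fade back down and switch off over 30 seconds.
requests.put(STATE_URL, json={"on": False, "transitiontime": 300})
```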
I had been playing around with pyautogui before switching to pydirectinput in order to automate things in Minecraft. I'm making a mining bot and I'm running into some issues with automated mouse movement in the game. I'm using the moveRel() function (I've also tried move() and moveTo(), and they produced the same result as moveRel()) to move the player's head up and down. However, even when I set the offsets to a really low value like 1, the player's head rotates through a full range of motion. To help you visualize this, in Minecraft, picture your character staring off at the horizon. Now imagine what would happen if you suddenly jerked the mouse back: the player would face down, right? Well, every time I try moving the mouse a little bit using pydirectinput, the player always ends up facing down. What is causing the player to look down, as if its camera were anchored, when I use the mouse-moving function in pydirectinput?
I solved my problem. It turns out that I needed to turn on Raw Input so that the mouse input wouldn't be accelerated so much. Raw Input uses the raw mouse movement from your computer, meaning that it does not accelerate or decelerate the mouse input to match the game sensitivity; I think that's how it works. By the way, Raw Input is in the mouse control settings in Minecraft. Anyway, because of the acceleration applied to my mouse input, the simulated mouse movement from my pydirectinput script was too sensitive for the game, and that's why the player always looked downwards no matter what numbers I put into the moveRel() function.
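For illustration, a minimal sketch of small, stepped relative movements with pydirectinput; the offsets, step count, and delay are arbitrary example values:

```python
import time
import pydirectinput

# Example values only: nudge the camera downward in small steps rather than one big move.
# With Minecraft's Raw Input setting enabled, small offsets like these produce small,
# predictable view changes instead of snapping the camera all the way down.
for _ in range(10):
    # relative=True asks pydirectinput to send a relative mouse move, which games expect;
    # if your pydirectinput version doesn't accept this parameter, plain moveRel(0, 5) works too.
    pydirectinput.moveRel(0, 5, relative=True)
    time.sleep(0.05)
```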
I'm having a terrible time trying to figure out what's going on with my baked lighting. It appears that only Realtime lights affect my model. I've attached 2 images to demonstrate the problem. I have several point lights in the interior of my model. If I set them to Realtime, everything looks great. However, if I set them to Baked and change the GI accordingly, they don't seem to interact with the model at all. Oddly enough, the Directional Light on the exterior (you can see it poking through the hallway door) seems to display fine when set to Baked.
The model is generated in Blender and I do have the "Generate Lightmap UVs" import option selected. I've tried just about every combination of settings I can think of.
It turns out the interior lights were just a few pixels above the surface of my ceiling cube, causing the light to never reach the interior of the room :/
I'm creating a zombie-preparedness app for iOS, and I thought it would be cool to have an "Apocalypse mode" that is similar to Airplane mode in that it replaces the carrier icon in the status bar, except with a little mushroom cloud or something instead of the airplane.
Apocalypse mode would just be a boolean flag in my app that disables all features requiring a data connection (only within the app, not using any private APIs or anything...). If possible, I would still like to keep the clock, battery life, Bluetooth icons, and whatever else pops up onto the status bar during normal operation.
I'm looking at the MTStatusBarOverlay library to implement this feature (related Stack Overflow post here). I know there is a possibility my app could get rejected for style because of this, but my thought is to not stray too far from the norm and cross my fingers that Apple doesn't jump on me for it.
My questions are:
How can I copy over the clock and battery life icons? Do I need to hook into an event, or is there a UI element I can add?
Am I going about this the right way? Would it be better to just make a transparent overlay on top of the normal status bar, with a mushroom cloud that overlays the carrier icon, instead of replacing the status bar entirely? I'm worried about variable-length carrier icons...
Of course, option 3 is that I forget this idea entirely and make some sort of different background or something for this mode, but that seems lame :P
I had a go with something similar a while ago. I created a status bar overlay that accepted touch events, but didn't block the status bar from receiving touches, which is crucial for app store acceptance.
You can check out my question and my answer; however, keep in mind that it might not be applicable anymore - it worked great in iOS 4, but I never tested it on iOS 5. Worth a try though.
As for the overlay itself, I suggest covering everything up to the clock and leaving the rest transparent; that should do the job.
For the past few months I've been looking into developing a Kinect based multitouch interface for a variety of software music synthesizers.
The overall strategy I've come up with is to create objects, either programmatically or (if possible) algorithmically, to represent the various controls of the soft synth (see the sketch after this list). These should have:
X position
Y position
Height
Width
MIDI output channel
MIDI data scaler (convert x/y coords to MIDI values)
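As a rough sketch of such an object in Python (the field names, the extra cc_number field, and the 0-127 scaling are my own assumptions, not a fixed design):

```python
from dataclasses import dataclass

@dataclass
class SynthControl:
    """One soft-synth control, described by its bounds and MIDI mapping."""
    x: float           # left edge, in screen (or normalized) coordinates
    y: float           # top edge
    width: float
    height: float
    midi_channel: int  # MIDI output channel for this control
    cc_number: int     # hypothetical: which CC message this control sends

    def contains(self, px: float, py: float) -> bool:
        """True if the point (px, py) falls inside this control's bounds."""
        return (self.x <= px <= self.x + self.width and
                self.y <= py <= self.y + self.height)

    def scale(self, px: float, py: float) -> int:
        """MIDI data scaler: map a vertical position inside the control to 0-127."""
        rel = (py - self.y) / self.height
        return max(0, min(127, int(round((1.0 - rel) * 127))))
```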
Two strategies I've considered for algorithmic creation are an XML description and somehow pulling stuff right off the screen (i.e. given a running program, find the x/y coords of all controls). I have no idea how to go about that second one, which is why I express it in such specific technical language ;). I could do some intermediate solution, like using mouse clicks on the corners of controls to generate an XML file. Another thing I could do, which I've seen frequently in Flash apps, is to put the screen size into a variable and use math to build all interface objects in terms of screen size. Note that it isn't strictly necessary to make the objects the same size as the on-screen controls, or to represent all on-screen objects (some are just indicators, not interactive controls).
Other considerations:
Given (for now) two sets of x/y coords as input (left and right hands), what is my best option for using them? My first instinct is/was to create some kind of focus test, where if the x/y coords fall within an interface object's bounds that object becomes active, and it then becomes inactive if they fall outside some other, smaller bounds for some period of time. The cheap solution I found was to use the left hand as the pointer/selector and the right as a controller, but it seems like I can do more. I have a few gesture solutions (hidden Markov models) I could screw around with. Not that they'd be easy to get to work, exactly, but it's something I could see myself doing given sufficient incentive.
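To make that focus test concrete, here is a rough Python sketch; the release margin and timeout are arbitrary assumptions (a positive margin gives a slightly larger release region, a negative one a smaller region, whichever works better in testing), and it assumes control objects with a contains() method like the earlier sketch:

```python
import time

class FocusTracker:
    """Activate a control when the hand enters its bounds; release focus only after
    the hand has stayed outside a margin-adjusted region for a short while."""

    def __init__(self, controls, margin=0.05, release_after=0.5):
        self.controls = controls          # e.g. a list of SynthControl objects
        self.margin = margin              # slack around the active control's bounds
        self.release_after = release_after  # seconds outside before focus is dropped
        self.active = None
        self.outside_since = None

    def update(self, px, py):
        if self.active is None:
            # No focus yet: grab the first control the hand is inside.
            for c in self.controls:
                if c.contains(px, py):
                    self.active = c
                    self.outside_since = None
                    break
        else:
            c = self.active
            inside_release = (c.x - self.margin <= px <= c.x + c.width + self.margin and
                              c.y - self.margin <= py <= c.y + c.height + self.margin)
            if inside_release:
                self.outside_since = None
            elif self.outside_since is None:
                self.outside_since = time.monotonic()
            elif time.monotonic() - self.outside_since > self.release_after:
                self.active = None
        return self.active
```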
So, to summarize, the problem is:
represent the interface (necessary because the default interface always expects mouse input)
select a control
manipulate it using two sets of x/y coords (rotary/continuous controller) or, in the case of switches, preferably use a gesture to switch it without giving/taking focus.
Any comments, especially from people who have worked/are working in multitouch I/O or NUI, are greatly appreciated. Links to existing projects and/or some good reading material (books, sites, etc) would be a big help.
Whoa, lots of stuff here. I worked on lots of NUI stuff during my time at Microsoft, so let's see what we can do...
But first, I need to get this pet peeve out of the way: you say "Kinect based multitouch". That's just wrong. Kinect inherently has nothing to do with touch (which is why you have the "select a control" challenge). The types of UI consideration needed for touch, body tracking, and mouse are totally different. For example, in touch UI you have to be very careful about resizing things based on screen size/resolution/DPI... regardless of the screen, fingers are always the same physical size and people have the same degree of physical accuracy, so you want your buttons and similar controls to always be roughly the same physical size. Research has found 3/4 of an inch to be the sweet spot for touchscreen buttons. This isn't so much of a concern with Kinect, though, since you aren't directly touching anything - accuracy is dictated not by finger size but by sensor accuracy and the user's ability to precisely control finicky & lagging virtual cursors.
If you spend time playing with Kinect games, it quickly becomes clear that there are 4 interaction paradigms.
1) Pose-based commands. User strikes and holds a pose to invoke some application-wide command (usually bringing up a menu)
2) Hover buttons. User moves a virtual cursor over a button and holds still for a certain period of time to select the button
3) Swipe-based navigation and selection. User waves their hands in one direction to scroll a list and in another direction to select from the list
4) Voice commands. User just speaks a command.
There are other mouse-like ideas that have been tried by hobbyists (I haven't seen these in an actual game) but frankly they suck: 1) using one hand for the cursor and the other hand to "click" where the cursor is, or 2) using the z-coordinate of the hand to determine whether to "click".
It's not clear to me whether you are asking about how to make some existing mouse widgets work with Kinect. If so, there are some projects on the web that will show you how to control the mouse with Kinect input but that's lame. It may sound super cool but you're really not at all taking advantage of what the device does best.
If I were building a music synthesizer, I would focus on approach #3 - swiping. Something like Dance Central. On the left side of the screen, show a list of your MIDI controllers with some small visual indication of their status. Let the user swipe their left hand to scroll through and select a controller from this list. On the right side of the screen, show how you are tracking the user's right hand within some plane in front of their body. Now you're letting them use both hands at the same time, giving immediate visual feedback of how each hand is being interpreted, and not requiring them to be super precise.
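Purely as an illustration of the swipe idea (not code from any shipping title), here is a rough Python sketch that watches one hand's horizontal position over a short window; the window length and travel threshold are assumptions to tune, and hand positions are assumed to be normalized to 0-1 across the interaction zone:

```python
from collections import deque

class SwipeDetector:
    """Detect a quick horizontal swipe from recent (timestamp, x) samples of one hand."""

    def __init__(self, window=0.3, min_distance=0.25):
        self.window = window              # seconds of history to keep
        self.min_distance = min_distance  # normalized horizontal travel that counts as a swipe
        self.samples = deque()            # (timestamp, x) pairs

    def update(self, t, x):
        self.samples.append((t, x))
        # Drop samples older than the window.
        while self.samples and t - self.samples[0][0] > self.window:
            self.samples.popleft()
        dx = x - self.samples[0][1]
        if dx > self.min_distance:
            self.samples.clear()
            return "right"
        if dx < -self.min_distance:
            self.samples.clear()
            return "left"
        return None
```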
P.S. I'd also like to give a shout-out to Josh Blake's upcoming NUI book. It's good stuff. If you really want to master this area, go order a copy :) http://www.manning.com/blake/
I've had theft problems outside my house, so I set up a simple webcam to capture a frame every second with Dorgem (http://dorgem.sf.net).
Dorgem does offer a feature to use motion detection to only capture frames where something is moving on the screen. The problem is that the motion detection algorithm it uses is extremely sensitive. It goes off because of variations in color between successive shots on my cheap webcam, and it also goes off because the trees in front of the house are blowing in the wind. Additionally, the front of my house is a high traffic area so there is also a large number of legitimately captured frames.
I average capturing 2800 out of 3600 frames every hour using Dorgem's motion detection. This is too much for me to search through to find out where the interesting activity is.
I wish I could reposition the camera to a more optimal spot where it would only capture the areas I'm interested in, so that motion detection would be simpler; however, this is not an option for me.
I think that because my camera has a fixed position and each picture frames the same area in front of my house, I should be able to scan the images and figure out which ones have motion in some interesting region of the image, throwing out all other frames.
For example: if there's a change at pixel (320, 240) then someone has stepped in front of my house and I want to see that frame, but if there's a change at pixel (1, 1) then it's just the trees blowing in the wind and the frame can be discarded.
I've looked at pdiff, a tool for finding diffs in sets of pictures, but it seems to be also focused on diffing the entire picture, rather than a specific region of it:
http://pdiff.sourceforge.net/
I've also looked at phash, a tool for calculating a hash based on human perception of an image, but it seems too complex:
http://www.phash.org/
I suppose I could implement it in a shell script using ImageMagick's mogrify -crop to cherry-pick the region of the image I'm interested in, then run pdiff to find the interesting ones, and use that to pick out the interesting frames.
Any thoughts? Ideas? Existing tools?
Cropping and then using pdiff seems like the best choice to me.
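For example, here is a minimal sketch of that crop-and-diff idea in Python with Pillow rather than a shell pipeline; the frame directory, crop box, and the per-pixel and per-region thresholds are assumptions you would tune for your camera:

```python
import glob
from PIL import Image, ImageChops

# Region of interest (left, upper, right, lower) in pixels; adjust for your camera view.
REGION = (200, 150, 440, 330)
# Fraction of pixels in the region that must change noticeably for a frame to be kept.
THRESHOLD = 0.02

def changed_fraction(img_a, img_b):
    """Fraction of pixels in the cropped region that differ by more than a small amount."""
    a = img_a.crop(REGION).convert("L")
    b = img_b.crop(REGION).convert("L")
    diff = ImageChops.difference(a, b)
    changed = sum(1 for p in diff.getdata() if p > 20)
    return changed / (a.width * a.height)

frames = sorted(glob.glob("frames/*.jpg"))
previous = None
for path in frames:
    current = Image.open(path)
    if previous is not None and changed_fraction(previous, current) > THRESHOLD:
        print("interesting:", path)
    previous = current
```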