Simulate touch on ios7/8 [duplicate] - objective-c

Is it possible to simulate a touch event at specific screen coordinates by pressing a specific key on a physical external keyboard (USB via the Camera Connection Kit, or Bluetooth) on a jailbroken iOS device, with everything that jailbreaking involves?
I would use this to press a button in an app (amplitude) with my foot; I want to use a keyboard as a footswitch.
Just for private use. No App Store or Cydia.
Thanks.

You can write scripts on your computer and use your keyboard and mouse to control your iOS device, based on the simulate-touch library described below.
iOS13-SimulateTouch is an open-source library that allows you to simulate touch events at the system level. You can write scripts in any programming language to simulate touch events on your iOS device, either remotely or locally. Please check the source code on GitHub: iOS13-SimulateTouch
iOS13-SimulateTouch
System-level touch simulation for iOS 11.0 - 13.6
Jailbroken device required
Description
This library enables you to simulate touch events on iOS 11.0 - 13.6 with just one line of code! All the source code will be released later.
Features
Multitouch supported (no other library you can find supports multitouch).
Programmable. Control scripts can be written in any programming language you desire.
Instant control supported. The iOS device can be controlled with no latency from other devices/computers.
System-level touch simulation (does not inject into any process).
Installation
Open Cydia - Sources - Edit - Add - http://47.114.83.227 ("http" instead of "https"!!! Please double check this.)
Install "ZJXTouchSimulation" tweak
Done
Code Example
Python Version
import socket
import time

# event types
TOUCH_UP = 0
TOUCH_DOWN = 1
TOUCH_MOVE = 2
SET_SCREEN_SIZE = 9

# you can copy and paste this helper into your own code
def formatSocketData(type, index, x, y):
    return '{}{:02d}{:05d}{:05d}'.format(type, index, int(x*10), int(y*10))

s = socket.socket()
s.connect(("127.0.0.1", 6000))  # connect to the tweak; replace "127.0.0.1" with your device's IP address
s.send(("1"+formatSocketData(SET_SCREEN_SIZE, 0, 2048, 2732)).encode())  # tell the tweak the screen size is 2048x2732 (yours may differ); re-send this each time SpringBoard is restarted (otherwise just once)
time.sleep(1)  # sleep for 1 second so the size setting is applied
s.send(("1"+formatSocketData(TOUCH_DOWN, 7, 300, 400)).encode())  # tell the tweak to touch down at (300, 400) on the screen
# IMPORTANT: note the "1" at the head of the data. It is the event count and CANNOT be omitted.
s.close()
Actually the touch is performed by only one line:
s.send(("1"+formatSocketData(TOUCH_DOWN, 7, 300, 400)).encode())
Neat and easy.
Perform Touch Move
s.send(("1"+formatSocketData(TOUCH_MOVE, 7, 800, 400)).encode()) # tell the tweak to move our finger "7" to (800, 400)
Perform Touch Up
s.send(("1"+formatSocketData(TOUCH_UP, 7, 800, 400)).encode()) # tell the tweak to touch up our finger "7" at (800, 400)
Combining them
s.send(("1"+formatSocketData(TOUCH_DOWN, 7, 300, 400)).encode())
time.sleep(1)
s.send(("1"+formatSocketData(TOUCH_MOVE, 7, 800, 400)).encode())
time.sleep(1)
s.send(("1"+formatSocketData(TOUCH_UP, 7, 800, 400)).encode())
First the finger touches (300, 400), then moves to (800, 400), and finally "leaves" the screen. All the touch events are performed with no latency.
Usage
After installation, the tweak starts listening on port 6000.
Use a socket to send the touch data field to the tweak.
The data field should consist entirely of decimal digits, as specified below.
NOTE: The usage might be updated in the future. I will update this post, but please keep track of the usage section on GitHub at Usage_iOS13-SimulateTouch.
Event Count (1 digit): Specify the number of single events. If you have multiple events to send at the same time, just increase the event count and append the events to the data (see the sketch after the worked example below).
Inside a single event
Type(1 digit): Specify the type of the single event.
Supported event type:
Event: Touch Up. Flag: 0. Description: Specify the event as a touch-up event
Event: Touch Down. Flag: 1. Description: Specify the event as a touch-down event
Event: Touch Move. Flag: 2. Description: Specify the event as a touch-move event (move the finger)
Event: Set Size. Flag: 9. Description: Set screen size (required!! Will be explained below)
Touch Index (2 digits): Apple supports multitouch, so you have to specify the finger index when posting touch events. The finger index ranges from 1 to 20 (0 is reserved; don't use 0 as a finger index).
x Coordinate (5 digits): The x coordinate of the point you want to touch. The first 4 digits are the integer part and the last digit is the decimal part. For example, if you want to touch (123.4, 2432.1) on the screen, you should fill in "01234" here.
y Coordinate (5 digits): The y coordinate of the point you want to touch. The first 4 digits are the integer part and the last digit is the decimal part. For example, if you want to touch (123.4, 2432.1) on the screen, you should fill in "24321" here.
Moreover
So if you want to touch down at (123.4, 1032.1) with finger "3", just connect to the tweak using a socket and send "11030123410321".
Digit 0: "1" indicates there is only one event to perform.
Digit 1: "1" indicates the event type: TOUCH_DOWN (the flag is 1).
Digits 2-3: "03" indicate this event is performed by finger "3".
Digits 4-8: "01234" indicate the x coordinate 123.4.
Digits 9-13: "10321" indicate the y coordinate 1032.1.
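Since the question is tagged objective-c, here is a minimal Objective-C sketch of building such a packet, including the multi-event case mentioned above. It only mirrors the digit layout described in this post; the two-finger payload and the helper name are made up for illustration, and the resulting string would be sent over the same socket connection (port 6000) as in the Python example.
// Hypothetical helper mirroring formatSocketData from the Python example above.
// Layout per event: 1-digit type, 2-digit finger index, 5-digit x and y (tenths of a point).
static NSString *eventField(int type, int fingerIndex, double x, double y) {
    return [NSString stringWithFormat:@"%d%02d%05d%05d",
            type, fingerIndex, (int)(x * 10), (int)(y * 10)];
}

// Two simultaneous touch-down events in one packet: the leading "2" is the event count,
// followed by the two event fields.
NSString *payload = [NSString stringWithFormat:@"2%@%@",
                     eventField(1, 3, 123.4, 1032.1),   // "1030123410321": touch down, finger 3
                     eventField(1, 4, 500.0, 600.0)];   // "1040500006000": touch down, finger 4
// payload is then written to the tweak's socket, just like the strings in the Python example.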
Important Note
The touch coordinates do not depend on the orientation of your device. However you place your device, the tap point on the screen will not change.

Related

How to capture App Screen as Video with Audio in Mac OSX?

I am writing a macOS (OS X) application where I need to record only the view of my application (not the whole display), together with the audio it emits.
Think of it as a game app where I need to record the complete gameplay view of the application. How should I go about doing this?
I am aware of "AVCaptureScreenInput" and the example, but how do I capture only the view of my application?
From the website you posted:
Note: By default, AVCaptureScreenInput captures the entire screen. You may set its cropRect property to limit the capture rectangle to a subsection of the screen.
Just set this property to the window's/view's rect and you're done.
Of course, you need to update and restart the recording when the window's/view's rect changes.
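For illustration, here is a minimal Objective-C sketch of that idea, under stated assumptions: myView, its window, outputURL and the recording delegate are placeholders, error handling is omitted, and the coordinate space expected by cropRect is assumed here to match AppKit screen coordinates.
#import <AVFoundation/AVFoundation.h>

AVCaptureSession *session = [[AVCaptureSession alloc] init];

AVCaptureScreenInput *screenInput =
    [[AVCaptureScreenInput alloc] initWithDisplayID:CGMainDisplayID()];

// Convert the view's bounds to screen coordinates and crop the capture to them.
NSRect rectInWindow = [myView convertRect:myView.bounds toView:nil];
NSRect rectOnScreen = [myView.window convertRectToScreen:rectInWindow];
screenInput.cropRect = NSRectToCGRect(rectOnScreen);

if ([session canAddInput:screenInput]) {
    [session addInput:screenInput];
}

AVCaptureMovieFileOutput *fileOutput = [[AVCaptureMovieFileOutput alloc] init];
if ([session canAddOutput:fileOutput]) {
    [session addOutput:fileOutput];
}

[session startRunning];
[fileOutput startRecordingToOutputFileURL:outputURL recordingDelegate:self];

// When the window moves or resizes, update screenInput.cropRect
// (and restart the recording if needed), as noted above.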
Read the document carefully; there is a comment about the displays:
// If you're on a multi-display system and you want to capture a secondary display,
// you can call CGGetActiveDisplayList() to get the list of all active displays.
// For this example, we just specify the main display.
// To capture both a main and secondary display at the same time, use two active
// capture sessions, one for each display. On Mac OS X, AVCaptureMovieFileOutput
// only supports writing to a single video track.
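A quick sketch of enumerating the active displays mentioned in that comment (it assumes at most 8 displays and omits error handling); each display ID could then be used to create its own AVCaptureScreenInput and capture session.
uint32_t displayCount = 0;
CGDirectDisplayID displays[8];
CGGetActiveDisplayList(8, displays, &displayCount);

for (uint32_t i = 0; i < displayCount; i++) {
    NSLog(@"Active display %u: id=%u%s",
          i, displays[i], CGDisplayIsMain(displays[i]) ? " (main)" : "");
}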

Selenium: resize window in a tiling window manager

I'm using a tiling window manager, i3, to run Selenium tests. Sometimes I run tests using Chrome and Firefox besides PhantomJS. I realize that one cannot resize an i3 window that's tiled, so I wonder what workarounds, if any, there are to resize the window, or to detect whether the window is currently tiled and set it to floating.
I realize that one option would be setting Chrome and Firefox to always run floating in the i3 config, but that would impede my regular workflow.
You actually can resize tiled windows, either with the mouse or with the resize command:
resize <grow|shrink> <up|down|left|right|height|width> [<px> px [or <ppt> ppt]]
This grows or shrinks the window in the given direction. With the optional parameters you can specify by what amount the window should be resized. The first argument - <px> px - is used for floating windows and defaults to 10 px. The second argument - <ppt> ppt - is for tiled windows and gives the amount in percentage points of the parent container; it defaults to 10 ppt.
In the default configuration there is a resize mode that can be reached with Mod+r (with Mod being either Alt or Super, depending on the choice made when first starting i3). While in this mode, windows - floating and tiled - can be grown with Down or Right and shrunk with Up and Left.
For example: say you have a container with 3 equally wide windows side by side, each taking 33.33 % of the container's width. If you run resize grow width on the middle one, it will be resized to take 43.33 % of the parent container, while the windows on either side will be shrunk to 28.33 %.
As for setting a window to floating, you can always do that with the command floating enable, so there is actually no need to query the current floating state.
If you really want to, you could query this information from i3's IPC interface.
Proof of concept:
Use i3-msg -t get_tree to retrieve the JSON-encoded layout tree (this gets you the whole JSON in one line, so maybe you want to put it through json_xs, json_pp or some similar program for more human-friendliness).
Get the ID of the current window with xdotool getactivewindow
Look in the tree for a node with attribute "window": <WINDOWID>
Check whether the attribute "floating" is set to "user_on" (or possibly "<ANYTHING>_on", I am not sure about that)

Serious performance problems with Qt5 and X11

We ported our application from Qt3 to Qt5. It runs smoothly under Windows but not under Linux (X11). With Qt3 there was no problem on either Windows or Linux.
Inside the application there is a canvas of about 1000x800 pixels. A simple vector graphic is drawn onto the canvas. The user clicks into the canvas and, holding the mouse button pressed, moves the mouse. Each mouse move results in a repaint.
We registered the milliseconds in each stage:
Start of MouseMove-event handling: 10581
call of update or repaint (makes no difference which one)
Handling of resulting Paint-Event: 10583
Painting finishes: 10584
return from update/repaint: 10687 (!)
I cannot find any reason for this lag of 100 ms (on every mouse move event!)
I need help!
In Qt 4.8 the native graphics backend was deprecated.
Remote X11 is no longer drawn with X11 calls but by painting onto a canvas and transmitting the result (a bitmap) to the client. This may result in larger bandwidth requirements and slower performance when running X11 over a network.
See also this

How to monitor for swipe gesture globally in OS X

I'd like to make an OS X application that runs in the background and performs some function when a swipe down with four fingers is detected on the trackpad.
Seems easy enough. Apple's docs show almost exactly this here. Their example monitors for mouse down events. As a simple test, I put the following in applicationDidFinishLaunching: in my AppDelegate.
void (^handler)(NSEvent *e) = ^(NSEvent *e) {
    NSLog(@"Left Mouse Down!");
};
[NSEvent addGlobalMonitorForEventsMatchingMask:NSLeftMouseDownMask handler:handler];
This works as expected. However, changing NSLeftMouseDownMask to NSEventMaskSwipe does not work. What am I missing?
Well, the documentation for NSEvent's +addGlobalMonitorForEventsMatchingMask:handler: gives a list of events it supports, and NSEventMaskSwipe is not listed, so... it's to be expected that it doesn't work.
While the API obviously supports tracking gestures locally within your own application (through NSResponder), I believe gestures can't be tracked globally by design. Unlike key combinations, there are far fewer forms/types of gestures... essentially only:
pinch in/out (NSEventTypeMagnify)
rotations (NSEventTypeRotation)
directional swipes with X amount of fingers (NSEventTypeSwipe)
There's not as much freedom. With keys, you have plenty of modifiers (Control, Option, Command, Shift) and the whole set of alphanumeric keys, making plenty of possible combinations, so it's easier to avoid conflicts between local and global events. Similarly, mouse events are region-based; clicking in one region can easily be differentiated from clicking in another region (from both the program's and the user's point of view).
Because of this smaller number of possible touch-event combinations, I believe Apple might purposely be restricting global usage (as in, one app responding to one or more gestures for the whole system) to its own features (Mission Control, Dashboard, etc.).
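To illustrate the local option mentioned above, here is a minimal sketch of handling swipes through NSResponder by overriding swipeWithEvent: in a view subclass. The class name is a placeholder, the view must be in the responder chain of the key window, and this only works while your own app is active, not globally.
@interface GestureView : NSView
@end

@implementation GestureView

- (void)swipeWithEvent:(NSEvent *)event {
    // For swipe events, deltaX/deltaY encode the swipe direction
    // (see the NSEvent documentation for the sign convention).
    NSLog(@"Swipe with deltaX=%f deltaY=%f", [event deltaX], [event deltaY]);
}

@end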

NSScreenNumber changes (randomly)?

In my application I need to distinguish between different displays, which I do by using the NSScreenNumber key of the deviceDescription dictionary provided by NSScreen. So far everything worked flawlessly, but now all of a sudden I sometimes get a different screen ID for my main screen (it's a laptop and I haven't attached a second screen in months; it's always the same hardware). The ID used to be 69676672, but now most of the time I get 2077806975.
At first I thought I might be misinterpreting the NSNumber somehow, but that doesn't seem to be the case; I also checked using the CGMainDisplayID() function and I get the same value. What is even weirder is that some of the Apple applications still seem to get the old ID: e.g. the desktop image is referenced in its config file using the screen ID, and when updating the desktop image, Apple's desktop image app uses the "correct" (= old) ID.
I am starting to wonder if there might have been a change in a recent update (10.7.1 or 10.7.2) that led to this; has anybody else noticed something similar or had this issue before?
Here is the code that I use:
// This is in an NSScreen category
- (NSNumber *)uniqueScreenID {
    return [[self deviceDescription] objectForKey:@"NSScreenNumber"];
}
And for getting an int:
// Assuming screen points to an instance of NSScreen
NSLog(#"Screen ID: %i", [[screen uniqueScreenID] intValue]);
This is starting to get frustrating, appreciate any help/ideas, thanks!
For Macs that have both built-in graphics and a discrete graphics card (such as MacBook Pro models with on-board Intel graphics and a separate graphics card), the display ID can change when the system automatically switches between the two. You can disable "Automatic graphics switching" in the Energy Saver preferences panel to test whether this is the cause of your screen-number changes (when disabled, the system will always use the discrete graphics card).
On such systems, the choice of which graphics hardware is in use at a particular time is tied to the applications that are currently running and their needs. I believe any use of OpenGL by an application causes a switch to the discrete graphics card, for instance.
If you need to notice when such a switch occurs while your application is running, you can register a callback (CGDisplayRegisterReconfigurationCallback) and examine the changes that occur (kCGDisplayAddFlag, kCGDisplayRemoveFlag, etc.). If you're trying to match a display to one previously used/encountered, you need to go beyond just comparing display IDs.
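A minimal sketch of registering such a callback (the callback name is arbitrary, and real code would also inspect the other kCGDisplay... flags and unregister the callback when done):
#import <Foundation/Foundation.h>
#import <ApplicationServices/ApplicationServices.h>

static void displayReconfigurationCallback(CGDirectDisplayID display,
                                           CGDisplayChangeSummaryFlags flags,
                                           void *userInfo) {
    if (flags & kCGDisplayAddFlag) {
        NSLog(@"Display %u added", display);
    }
    if (flags & kCGDisplayRemoveFlag) {
        NSLog(@"Display %u removed", display);
    }
}

// Register the callback, e.g. at application startup:
CGDisplayRegisterReconfigurationCallback(displayReconfigurationCallback, NULL);

// ... and remove it when it is no longer needed:
CGDisplayRemoveReconfigurationCallback(displayReconfigurationCallback, NULL);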