Swift RC4 vs. Objective-C RC4 Performance - objective-c

I have been trying to rewrite an RC4 algorithm from Objective-C to Swift, to test out Apple's (now old) claims about Swift running a lot faster.
However, I must be doing something horribly wrong somewhere, given the times I am getting.
This is the Objective-C code:
+ (NSString *)Rc4:(NSString *)aInput key:(NSString *)aKey {
    NSMutableArray *iS = [[NSMutableArray alloc] initWithCapacity:256];
    NSMutableArray *iK = [[NSMutableArray alloc] initWithCapacity:256];
    for (int i = 0; i < 256; i++) {
        [iS addObject:[NSNumber numberWithInt:i]];
    }
    for (short i = 0; i < 256; i++) {
        UniChar c = [aKey characterAtIndex:i % aKey.length];
        [iK addObject:[NSNumber numberWithChar:c]];
    }
    int j = 2;
    for (int i = 0; i < 255; i++) {
        int is = [[iS objectAtIndex:i] intValue];
        UniChar ik = (UniChar)[[iK objectAtIndex:i] charValue];
        j = (j + is + ik) % 256;
        NSNumber *temp = [iS objectAtIndex:i];
        [iS replaceObjectAtIndex:i withObject:[iS objectAtIndex:j]];
        [iS replaceObjectAtIndex:j withObject:temp];
    }
    int i = 0;
    j = 0;
    NSString *result = aInput;
    for (short x = 0; x < [aInput length]; x++) {
        i = (i + 1) % 256;
        int is = [[iS objectAtIndex:i] intValue];
        j = (j + is) % 256;
        int is_i = [[iS objectAtIndex:i] intValue];
        int is_j = [[iS objectAtIndex:j] intValue];
        int t = (is_i + is_j) % 256;
        int iY = [[iS objectAtIndex:t] intValue];
        UniChar ch = (UniChar)[aInput characterAtIndex:x];
        UniChar ch_y = ch ^ iY;
        //NSLog(ch);
        //NSLog(iY);
        result = [result stringByReplacingCharactersInRange:NSMakeRange(x, 1)
                                                 withString:[NSString stringWithCharacters:&ch_y length:1]];
    }
    [iS release];
    [iK release];
    return result;
}
This runs pretty fast; compiling with -O3 I get times of:
100 runs: 0.006 seconds
With key: 6f7e2a3d744a3b5859725f412f (128bit)
and input: "MySecretCodeToBeEncryptionSoNobodySeesIt"
This is my attempt to implement it the same way in Swift:
extension String {
    subscript (i: Int) -> String {
        return String(Array(self)[i])
    }
}
extension Character {
    func unicodeValue() -> UInt32 {
        for s in String(self).unicodeScalars {
            return s.value
        }
        return 0
    }
}
func Rc4(input: String, key: String) -> String {
    var iS = Array(count: 256, repeatedValue: 0)
    var iK = Array(count: 256, repeatedValue: "")
    var keyLength = countElements(key)
    for var i = 0; i < 256; i++ {
        iS[i] = i
    }
    for var i = 0; i < 256; i++ {
        var c = key[i % keyLength]
        iK[i] = c
    }
    var j = 2
    for var i = 0; i < 255; i++ {
        var iss = iS[i]
        var ik = iK[i]
        // transform string to int
        var ik_x: Character = Character(ik)
        var ikk_xx = Int(ik_x.unicodeValue())
        j = (j + iss + ikk_xx) % 256
        var temp = iS[i]
        iS[i] = iS[j]
        iS[j] = temp
    }
    var i = 0
    j = 0
    var result = input
    var eles = countElements(input)
    for var x = 0; x < eles; x++ {
        i = (i + 1) % 256
        var iss = iS[i]
        j = (j + iss) % 256
        var is_i = iS[i]
        var is_j = iS[j]
        var t = (is_i + is_j) % 256
        var iY = iS[t]
        var ch = input[x]
        var ch_x: Character = Character(ch)
        var ch_xx = Int(ch_x.unicodeValue())
        var ch_y = ch_xx ^ iY
        var start = advance(result.startIndex, x)
        var end = advance(start, 1)
        let range = Range(start: start, end: end)
        var maybestring = String(UnicodeScalar(ch_y))
        result = result.stringByReplacingCharactersInRange(range, withString: maybestring)
    }
    return result
}
I have tried to implement it so it looks as much like the Objective-C version as possible.
This, however, gives me these horrible times using -O:
100 runs: 0.5 seconds
EDIT
The code should now run in Xcode 6.1, using the extension methods I posted.
I run it from the terminal like this:
xcrun swiftc -O Swift.swift -o swift
where Swift.swift is my source file and swift is the executable.

Usually, claims of speed don't really apply to encryption algorithms; they apply more to what I usually call "business logic". Functions on bits, bytes, and 16/32/64-bit words are hard to optimize: encryption algorithms are deliberately designed as dense operations on these data structures, with relatively few choices that a compiler can optimize away.
Take Java, for instance. Although far faster than most interpreted languages, it really doesn't compare well with C/C++, let alone with assembly-optimized encryption algorithms. The same goes for most relatively small algebraic problems.
To make things faster you should at least use explicit numeric types for your numbers.

After extensive testing of the code, I have narrowed down what is making my times ultra slow.
If I comment out this code, so that the iK array just keeps its initial values, I go from a runtime of 5 seconds to 1 second, which is a significant improvement:
for var i = 0; i < 256; i++ {
    var c = key[i % keyLength]
    iK[i] = c
}
The problem is with this part:
var c = key[i%keyLength]
There is no "characterAtIndex(int)" method in Swift, so as a workaround I get the character at an index using my extension:
extension String {
    subscript (i: Int) -> String {
        return String(Array(self)[i])
    }
}
But essentially it is the same as this:
var c = Array(key)[i%keyLength]
Instead of the O(1) (constant-time) characterAtIndex: lookup we get in Objective-C, each subscript access here runs in O(n), because Array(self) copies the whole string's characters on every call, making the loop as a whole quadratic.

Related

I am trying to convert Java byte[] to Objective-c

I'm trying to convert this Java code to Objective-C, and I want to produce the same value as Java does.
long value = 165500000;
for (int i = 8; i-- > 0; value >>= 8) {
    data[i] = (byte) value;
}
The code below is my Objective-C conversion, but the data comes out different from Java's.
How do I get the same data as in Java?
long time = 165500000;
char cData[8];
for (int i = 8; i-- > 0; time >>= 8) {
    cData[i] = (char)time;
}

heap corruption when using pin_ptr to copy from native code to managed code

I am trying to copy unsigned short values from native code to managed code, but I get heap corruption when calling memcpy.
INPUT: unsigned short* input
OUTPUT: array<unsigned short> output
I have the following code; if I set testDataSize to 100, I don't see the corruption.
Could someone please shed some light ?
Thanks,
typedef unsigned short uns16;

// DLL Entry Point
void main()
{
    int testDataSize = 600;
    int frSize = testDataSize / 2;
    for (int j = 0; j < 1; j++)
    {
        uns16* input;
        array<uns16>^ output1;
        array<uns16>^ output2;
        input = new uns16(frSize);
        output1 = gcnew array<uns16>(frSize);
        output2 = gcnew array<uns16>(frSize);
        // initialize
        for (int i = 0; i < frSize; i++)
        {
            input[i] = i;
        }
        // test 1
        Stopwatch^ sw1 = Stopwatch::StartNew();
        //-------------------------------------------------------------------
        array<short>^ frameDataSigned = gcnew array<short>(frSize);
        Marshal::Copy(IntPtr((void*)(input)), frameDataSigned, 0, frameDataSigned->Length);
        System::Buffer::BlockCopy(frameDataSigned, 0, output1, 0, (Int32)(frSize) * 2);
        //-------------------------------------------------------------------
        auto res1 = sw1->ElapsedTicks;
        // test 2
        Stopwatch^ sw2 = Stopwatch::StartNew();
        //-------------------------------------------------------------------
        cli::pin_ptr<uns16> pinnedManagedData = &output2[0];
        memcpy(pinnedManagedData, (void*)(input), frSize * sizeof(uns16));
        //-------------------------------------------------------------------
        auto res2 = sw2->ElapsedTicks;
        ....
int frSize = 300;
input = new uns16(frSize);
This doesn't allocate an array. It allocates a single uns16 and initializes its value to 300 (frSize). You need square brackets to allocate an array:
input = new uns16[frSize];

Microsoft Solver Foundation - Results Are Not Good Enough ( Stock Portfolio Optimization )

I'm trying to code a Markowitz optimization class in C#, but the optimization results are not good enough: the portfolio weights differ from MATLAB's and Excel's solutions by 0.2% on average. I have checked my covariance matrix calculation and the other calculations and found them correct. Is there a way to calibrate the model's tolerance, or something similar, to get better results? Here is my code.
public List<OptimalWeight> CalcOptimalWeights(bool isNegativeAllowed, string method)
{
    List<OptimalWeight> weightsResult = new List<OptimalWeight>();
    List<List<CovarItem>> covariances = new List<List<CovarItem>>();
    covariances = CalcCovariances();
    int n = this.AssetReturnList.Count();
    SolverContext solver = SolverContext.GetContext();
    Model model = solver.CreateModel();
    if (isNegativeAllowed == false)
    {
        Decision[] weights = new Decision[n];
        for (int i = 0; i < n; i++)
        {
            model.AddDecision(weights[i] = new Decision(Domain.RealNonnegative, null));
        }
        model.AddConstraint("SumWeights", Model.Sum(weights) == 1);
        if (this.Constraints.Count() == 0)
        {
            if (method == "MinVar")
            {
                Term portVar = 0.0;
                for (int i = 0; i < n; i++)
                {
                    for (int j = 0; j < n; j++)
                    {
                        portVar += weights[j] * covariances[j][i].Covar * weights[i];
                    }
                }
                model.AddGoal("MinVarPort", GoalKind.Minimize, portVar);
                Solution solution = solver.Solve();
                var report = solution.GetReport();
                var decisions = solution.Decisions;
                List<double> d = decisions.Select(x => x.GetDouble()).ToList();
                for (int i = 0; i < n; i++)
                {
                    weightsResult.Add(new OptimalWeight {
                        AssetId = AssetReturnList[i].AssetId,
                        Symbol = AssetReturnList[i].Symbol,
                        Weight = d[i] });
                }
                double pvar = solution.Goals.First().ToDouble();
            }
        }
    }
    return weightsResult;
}

How to get Array of Unique PIDs from CGWindowListCopyWindowInfo in Swift

I am trying to convert the following Objective-C method to Swift 3. The goal is to obtain an array of unique process identifiers (kCGWindowOwnerPID) for all "onscreen" window elements in layer 0, excluding desktop elements.
My Objective-C method uses an NSSet to remove duplicate PIDs from an NSArray that is filtered using an NSPredicate:
+ (NSArray *)filteredProcessIndentifiers
{
    pid_t myPid = [[NSProcessInfo processInfo] processIdentifier];
    NSArray *windowList = (id)CGWindowListCopyWindowInfo(kCGWindowListOptionOnScreenOnly
                                                         | kCGWindowListExcludeDesktopElements,
                                                         kCGNullWindowID);
    NSArray *uniquePidArray = [[NSSet setWithArray:
        [[(id)windowList filteredArrayUsingPredicate:
            [NSPredicate predicateWithFormat:@"(kCGWindowLayer == 0 && kCGWindowOwnerPID != %d)", myPid]]
        valueForKey:@"kCGWindowOwnerPID"]]
        allObjects];
    if (windowList) {
        CFRelease(windowList);
    }
    return uniquePidArray;
}
This Swift 3 example works to get a filtered array of elements (in layer 0, excluding my PID); however, the result still contains all keys and duplicate PIDs:
/// - returns: Array of WindowInfo dictionaries.
func windowListFiltered() throws -> [AnyObject] {
    var windowListArray: CFArray?
    let options = CGWindowListOption(arrayLiteral: CGWindowListOption.excludeDesktopElements, CGWindowListOption.optionOnScreenOnly)
    let filterPredicate = NSPredicate(format: "(kCGWindowLayer == 0 && kCGWindowOwnerPID != %d)", getpid())
    windowListArray = CGWindowListCopyWindowInfo(options, kCGNullWindowID)
    let filtered = (windowListArray as NSArray?)?.filtered(using: filterPredicate)
    return (filtered as [AnyObject]?)!
}
The windowListFiltered() method result produces:
[{
kCGWindowAlpha = 1;
kCGWindowBounds = {
Height = 436;
Width = 770;
X = 525;
Y = 313;
};
kCGWindowIsOnscreen = 1;
kCGWindowLayer = 0;
kCGWindowMemoryUsage = 1072;
kCGWindowName = Debug;
kCGWindowNumber = 213;
kCGWindowOwnerName = Finder;
kCGWindowOwnerPID = 453;
kCGWindowSharingState = 1;
kCGWindowStoreType = 1;
}, {
kCGWindowAlpha = 1;
kCGWindowBounds = {
Height = 537;
Width = 380;
X = 61;
Y = 354;
};
kCGWindowIsOnscreen = 1;
kCGWindowLayer = 0;
kCGWindowMemoryUsage = 1072;
kCGWindowName = Documents;
kCGWindowNumber = 3416;
kCGWindowOwnerName = Finder;
kCGWindowOwnerPID = 453;
kCGWindowSharingState = 1;
kCGWindowStoreType = 1;
}, {
kCGWindowAlpha = 1;
kCGWindowBounds = {
Height = 22;
Width = 1414;
X = 118;
Y = 28;
};
kCGWindowIsOnscreen = 1;
kCGWindowLayer = 0;
kCGWindowMemoryUsage = 128208;
kCGWindowName = "swift3 - Cannot subscript a value of type [[String:Any]] with an index of type 'String' - Swift 3 - Stack Overflow";
kCGWindowNumber = 7798;
kCGWindowOwnerName = WindowMizer;
kCGWindowOwnerPID = 495;
kCGWindowSharingState = 1;
kCGWindowStoreType = 2;
}]
What I need is an Array like:
[453,495]
I have been able to get part-way there, but I am unable to pull an array of PIDs from the filtered array. The attempt below iterates through one array to build a second one, which does work, but it still contains duplicate PIDs. I can eliminate those, but I am trying to find the best way to accomplish the original goal.
func filteredProcessIndentifiers() throws -> [Int] {
    var processIds: [Int] = []
    var windowListArray: CFArray?
    let options = CGWindowListOption(arrayLiteral: CGWindowListOption.excludeDesktopElements, CGWindowListOption.optionOnScreenOnly)
    let filterPredicate = NSPredicate(format: "(kCGWindowLayer == 0 && kCGWindowOwnerPID != %d)", getpid())
    windowListArray = CGWindowListCopyWindowInfo(options, kCGNullWindowID)
    let filtered = (windowListArray as NSArray?)?.filtered(using: filterPredicate) as? [[String: Any]]
    for dict in filtered! {
        processIds.append(dict["kCGWindowOwnerPID"] as! Int)
    }
    return processIds
}
In simplest terms, given that I have the filtered array, I tried to return:
filtered["kCGWindowOwnerPID"]
which won't compile, with the error "Type 'NSArray?' has no subscript members".
I am hoping to create something a little more succinct, like the beautiful Objective-C example :-). I'll try again tonight and concentrate on using a Swift 3 equivalent of NSSet to eliminate the duplicates.
Any insight on the best way to obtain an array of unique process identifiers from CGWindowListCopyWindowInfo() would be greatly appreciated.
Change the for loop inside the function filteredProcessIndentifiers to this:
for diction in filtered! {
    let test = diction as? Dictionary<String, Any>
    if let item = test {
        print("testa: \(String(describing: item["kCGWindowOwnerPID"]))")
    }
}
Thank you. I've been trying to figure this out for almost two days and coming across your question gave me this idea and it worked! So thank you.

"Binary/Unary operator '++/<' cannot be applied to an operand of type" AND "Use of unresolved identifier '=-'"

I am translating an Obj-C app to Swift and having trouble with some of the syntax. I believe I have declared the variable types correctly, so I don't know why I'm getting these errors. Maybe some blocks are located incorrectly inside classes/functions when they should be outside, or something similar. I would love it if you could review my code. I'm new to programming, so what may be a clear and explicit explanation to you will probably still be vague to me, so please show examples using the existing names.
Thanks
"Unary operator '++' cannot be applied to an operand of type 'Int?'"
and
"Binary operator '<' cannot be applied to operands of type 'Int?' and 'Float'"
and
"Use of unresolved identifier '=-'"
import UIKit
import Foundation
import AVFoundation

let minFramesForFilterToSettle = 10

enum CurrentState {
    case statePaused
    case stateSampling
}

class ViewController: UIViewController, AVCaptureVideoDataOutputSampleBufferDelegate {
    let session = AVCaptureSession()
    var camera: AVCaptureDevice?
    var validFrameCounter: Int = 0
    var pulseDetector: PulseDetector!
    var filter: Filter!
    var currentState = CurrentState.stateSampling // Is this initialized correctly?

    override func viewDidLoad() {
        super.viewDidLoad()
        self.pulseDetector = PulseDetector()
        self.filter = Filter()
        // TO DO startCameraCapture() // call to un-used function.
    }

    override func didReceiveMemoryWarning() {
        super.didReceiveMemoryWarning()
        // Dispose of any resources that can be recreated.
    }
}

let NZEROS = 10
let NPOLES = 10

class Filter {
    var xv = [Float](count: NZEROS + 1, repeatedValue: 0)
    var yv = [Float](count: NPOLES + 1, repeatedValue: 0)

    func processValue(value: Float) -> Float {
        let gain: Float = 1.894427025e+01
        xv[0] = xv[1]; xv[1] = xv[2]; xv[2] = xv[3]; xv[3] = xv[4]; xv[4] = xv[5]; xv[5] = xv[6]; xv[6] = xv[7]; xv[7] = xv[8]; xv[8] = xv[9]; xv[9] = xv[10]; xv[10] = value / gain;
        yv[0] = yv[1]; yv[1] = yv[2]; yv[2] = yv[3]; yv[3] = yv[4]; yv[4] = yv[5]; yv[5] = yv[6]; yv[6] = yv[7]; yv[7] = yv[8]; yv[8] = yv[9]; yv[9] = yv[10];
        yv[10] = (xv[10] - xv[0]) + 5 * (xv[2] - xv[8]) + 10 * (xv[6] - xv[4])
            + ( -0.0000000000 * yv[0]) + ( 0.0357796363 * yv[1])
            + ( -0.1476158522 * yv[2]) + ( 0.3992561394 * yv[3])
            + ( -1.1743136181 * yv[4]) + ( 2.4692165842 * yv[5])
            + ( -3.3820859632 * yv[6]) + ( 3.9628972812 * yv[7])
            + ( -4.3832594900 * yv[8]) + ( 3.2101976096 * yv[9]);
        return yv[10]
    }
}
let maxPeriod = 1.5 // float?
let minPeriod = 0.1 // float?
let invalidEntry: Double = -11
let maxPeriodsToStore: Int = 20
let averageSize: Float = 20
class PulseDetector {
    var upVals: [Float] = [averageSize]
    var downVals: [Float] = [averageSize]
    var upValIndex: Int?
    var downValIndex: Int?
    var lastVal: Float?
    var periodStart: Float?
    var periods: [Double] = []
    var periodTimes: [Double] = []
    var periodIndex: Int?
    var started: Bool?
    var freq: Float?
    var average: Float?
    var wasDown: Bool?

    func reset() {
        for var i = 0; i < maxPeriodsToStore; i++ {
            periods[i] = invalidEntry
        }
        for var i = 0; i < averageSize; i++ { // why error when PulseDetector.h said averageSize was an Int?
            upVals[i] = invalidEntry
            downVals[i] = invalidEntry
        }
        freq = 0.5
        periodIndex = 0
        downValIndex = 0
        upValIndex = 0
    }

    func addNewValue(newVal: Float, atTime: Double) -> Float {
        // we keep track of the number of values above and below zero
        if newVal > 0 {
            upVals[upValIndex!] = newVal
            upValIndex++
            if upValIndex >= averageSize {
                upValIndex = 0
            }
        }
        if newVal < 0 {
            downVals[downValIndex] =- newVal
            downValIndex++
            if downValIndex >= averageSize {
                downValIndex = 0
            }
        }
        // work out the average value above zero
        var count: Float
        var total: Float
        for var i = 0; i < averageSize; i++ {
            if upVals[i] != invalidEntry {
                count++
                total += upVals[i]
            }
        }
        var averageUp = total / count
        // and the average value below zero
        count = 0
        total = 0
        for var i = 0; i < averageSize; i++ {
            if downVals[i] != invalidEntry {
                count++
                total += downVals[i]
            }
        }
        var averageDown = total / count
        // is the new value a down value?
        if newVal < (-0.5 * averageDown) {
            wasDown = true
        }
// original Objective-C code
PulseDetector.h
#import <Foundation/Foundation.h>

#define MAX_PERIODS_TO_STORE 20 // is this an Int?
#define AVERAGE_SIZE 20 // is this a Float?
#define INVALID_PULSE_PERIOD -1 // done

@interface PulseDetector : NSObject {
    float upVals[AVERAGE_SIZE];
    float downVals[AVERAGE_SIZE];
    int upValIndex;
    int downValIndex;
    float lastVal;
    float periodStart;
    double periods[MAX_PERIODS_TO_STORE]; // this is an array!
    double periodTimes[MAX_PERIODS_TO_STORE]; // this is an array!
    int periodIndex;
    bool started;
    float freq;
    float average;
    bool wasDown;
}

@property (nonatomic, assign) float periodStart; // var periodStart = float?

- (float)addNewValue:(float)newVal atTime:(double)time; // declaring a method called addNewValue with 2 arguments called newVal and time that returns a float
- (float)getAverage; // declaring a method called getAverage that returns a float
- (void)reset; // declaring a method that returns nothing
@end
PulseDetector.m
#import <QuartzCore/QuartzCore.h>
#import "PulseDetector.h"
#import <vector>
#import <algorithm>

#define MAX_PERIOD 1.5
#define MIN_PERIOD 0.1
#define INVALID_ENTRY -100 // is this a double?

@implementation PulseDetector

@synthesize periodStart;

- (id)init
{
    self = [super init];
    if (self != nil) {
        // set everything to invalid
        [self reset];
    }
    return self;
}

- (void)reset {
    for (int i = 0; i < MAX_PERIODS_TO_STORE; i++) {
        periods[i] = INVALID_ENTRY;
    }
    for (int i = 0; i < AVERAGE_SIZE; i++) {
        upVals[i] = INVALID_ENTRY;
        downVals[i] = INVALID_ENTRY;
    }
    freq = 0.5;
    periodIndex = 0;
    downValIndex = 0;
    upValIndex = 0;
}

- (float)addNewValue:(float)newVal atTime:(double)time {
    // we keep track of the number of values above and below zero
    if (newVal > 0) {
        upVals[upValIndex] = newVal;
        upValIndex++;
        if (upValIndex >= AVERAGE_SIZE) {
            upValIndex = 0;
        }
    }
    if (newVal < 0) {
        downVals[downValIndex] = -newVal;
        downValIndex++;
        if (downValIndex >= AVERAGE_SIZE) {
            downValIndex = 0;
        }
    }
    // work out the average value above zero
    float count = 0;
    float total = 0;
    for (int i = 0; i < AVERAGE_SIZE; i++) {
        if (upVals[i] != INVALID_ENTRY) {
            count++;
            total += upVals[i];
        }
    }
    float averageUp = total / count;
    // and the average value below zero
    count = 0;
    total = 0;
    for (int i = 0; i < AVERAGE_SIZE; i++) {
        if (downVals[i] != INVALID_ENTRY) {
            count++;
            total += downVals[i];
        }
    }
    float averageDown = total / count;
    // is the new value a down value?
    if (newVal < -0.5 * averageDown) {
        wasDown = true;
    }
    // is the new value an up value and were we previously in the down state?
    if (newVal >= 0.5 * averageUp && wasDown) {
        wasDown = false;
        // work out the difference between now and the last time this happened
        if (time - periodStart < MAX_PERIOD && time - periodStart > MIN_PERIOD) {
            periods[periodIndex] = time - periodStart;
            periodTimes[periodIndex] = time;
            periodIndex++;
            if (periodIndex >= MAX_PERIODS_TO_STORE) {
                periodIndex = 0;
            }
        }
        // track when the transition happened
        periodStart = time;
    }
    // return up or down
    if (newVal < -0.5 * averageDown) {
        return -1;
    } else if (newVal > 0.5 * averageUp) {
        return 1;
    }
    return 0;
}

- (float)getAverage {
    double time = CACurrentMediaTime();
    double total = 0;
    double count = 0;
    for (int i = 0; i < MAX_PERIODS_TO_STORE; i++) {
        // only use up to 10 seconds worth of data
        if (periods[i] != INVALID_ENTRY && time - periodTimes[i] < 10) {
            count++;
            total += periods[i];
        }
    }
    // do we have enough values?
    if (count > 2) {
        return total / count;
    }
    return INVALID_PULSE_PERIOD;
}

@end
Your problem is that you didn't copy the defines:
#define MAX_PERIODS_TO_STORE 20 // is this an Int?
#define AVERAGE_SIZE 20 // is this a Float?
#define INVALID_PULSE_PERIOD -1 // done
You have to translate these defines so they work in your Swift code.
Check this answer for how to replace an Objective-C #define with something Swift can use.
Alternatively, you could just change the defines to variables and initialize your variables with them.
First, a bit on optionals. Variables whose type ends with '?' are Optionals, meaning they are allowed to be nil (basically, to hold no value). The compiler cannot know at compile time whether such a variable holds a value, because you are allowed to set it to nil.
"Unary operator '++' cannot be applied to an operand of type 'Int?'"
You seem to have read that last word as Int, but it is Int?, which is significant. Since it is an optional (as indicated by the question mark), the compiler knows it can be nil. You cannot use ++ on nil, and since optionals can be nil, you cannot use ++ on optionals. You must forcibly unwrap it first:
downValIndex!++ //note the exclamation point for unwrapping
"Use of unresolved identifier '=-'"
=- isn't an operator; -= is. Note, though, that the Objective-C original, downVals[downValIndex]=-newVal, assigns the negated value, so the faithful translation is:
downVals[downValIndex] = -newVal // assign the negation
downVals[downValIndex] -= newVal // this would subtract instead, which changes the behavior
"Binary operator '>=' cannot be applied to operands of type 'Int?' and 'Float'"
The compiler sees an optional Int on the left of the >= and a Float on the right. Assuming you want two Ints, you must unwrap the left side and convert the right side to Int (an as! cast cannot convert between numeric types; use an initializer):
if downValIndex! >= Int(averageSize) { // convert the Float to Int
You should just be defining averageSize as an int though
var averageSize:Int = 10 //or whatever number
Also, you have lots of optionals. If any of them can be given a value at compile time, it will make your life easier, as you won't need to unwrap them everywhere. Alternatively, you could implicitly unwrap them (only do this if you are absolutely sure they will never be nil):
var implicitlyUnwrappedOptional:Int!