I've got a method that generates random strings:
def generate_letters(length)
  chars = 'ABCDEFGHJKLMNOPQRSTUVWXYZ'
  letters = ''
  length.times { letters << chars[rand(chars.length)] }
  letters
end
I want to map values to the generated strings, e.g. A = 1, B = 2, C = 3, so that if I generate ACB it equals 132. Any suggestions?
You can use this to concatenate the values:
s = 'ACB'
puts s.chars.map { |c| c.ord - 'A'.ord + 10 }.join.to_i
# => 101211
and to sum them instead, use the Enumerable#inject method (see the docs, there are some nice examples):
s.chars.inject(0) { |r, c| r + (c.ord - 'A'.ord + 10) } # => 33
or Enumerable#sum if you're doing it inside Rails:
s.chars.sum { |c| c.ord - 'A'.ord + 10 } # => 33
How would you deal with the ambiguity for letters at 10 and above (J)?
For example, how would you differentiate between BKC=2113 and BAAC=2113?
Disregarding this problem, you can do this:
def string_to_funny_number(str)
  number = ''
  str.each_byte { |char_value| number << (1 + char_value - 'A'.ord).to_s }
  number.to_i
end
This function will generate a correct int by concatenating each letter's value (A=1, B=2, ...).
Beware that this function doesn't sanitize its input, as I am assuming you are using it with the output of the other function.
I am new to programming and starting up with Kotlin. I've been stuck on this problem for a few days and I really need some assistance. I am trying to read a user input of Strings in a format like this: 3 + 2 + 1, go through the String, and wherever there is an operator, add up the numbers before and after the operator sign. So the above 3 + 2 + 1 should output 6.
Here's a snippet of my code
fun main() {
    val userInput = readLine()!!.split(" ")
    var sum = 0
    for (i in 0 until userInput.size) {
        if (userInput.get(i) == "+") {
            sum += userInput.get(i-1).toInt() + userInput.get(i+1).toInt()
        }
    }
    println(sum)
}
My code works until the point of adding up the numbers. It repeats the next number after the operator, so using the above example of 3 + 2 + 1 it outputs 8, i.e. 3 + 2 + 2 + 1. I'm so confused and don't know how to go about this.
Try not to increment the sum each time; instead, overwrite the last number that participated in the sum (a sketch follows the steps):
You have the input: 1 + 2 + 3 + 4
Split it.
Now you have the array [1, +, 2, +, 3, +, 4].
Iterate over this array; when you hit the first plus, sum the values around it.
Overwrite the second summed value with your sum.
Now you have the new array [1, +, 3, +, 3, +, 4].
At the end of the loop, you will have the array [1, +, 3, +, 6, +, 10],
and your sum is the last element of the array.
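A quick sketch of that idea in Python (rather than Kotlin, purely for illustration):
tokens = "1 + 2 + 3 + 4".split(" ")   # ['1', '+', '2', '+', '3', '+', '4']
for i in range(len(tokens)):
    if tokens[i] == "+":
        # overwrite the right-hand operand with the running sum
        tokens[i + 1] = str(int(tokens[i - 1]) + int(tokens[i + 1]))
print(tokens)      # ['1', '+', '3', '+', '6', '+', '10']
print(tokens[-1])  # the total is the last element: 10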
The logic of your code is that for each "+" encountered, it adds the sum of the numbers left and right of the "+" to the sum. For the example "1 + 2 + 3", here's what is happening:
Starting sum is 0.
At first "+", add 1 + 2 to sum, so now sum is 3.
At second "+", add 2 + 3 to sum, so now the sum is 3 + 5 = 8.
So you are adding all the middle numbers to the total twice, because they each appear next to two operators.
One way to do this is to start with the first number as your sum, then add only the number to the right of each "+", so each number is counted only once.
fun main() {
    val userInput = readLine()!!.split(" ")
    var sum = userInput[0].toInt()
    for (i in userInput.indices) {
        if (userInput[i] == "+") {
            sum += userInput[i + 1].toInt()
        }
    }
    println(sum)
}
sum += userInput.get(i-1).toInt() + userInput.get(i+1).toInt()
This is only valid for the first iteration. Say the user inputs 1 + 2 + 3.
So
userInput[0] is 1
userInput[1] is +...
the first time that line is triggered, sum will be 1 + 2, and that's quite fine; but the second time, at the second +, you will add userInput[i-1] (which is 2 and was already in the total sum) and userInput[i+1], which is 3, so you are doing 1 + 2 + 2 + 3.
You need to understand why this is happening and think of another way to implement it.
Check this, it is working for me:
var sum = 0
readLine()?.split(" ")?.filter { it.toIntOrNull() != null }?.forEach { sum += it.toInt() }
println(sum)
In a column I have values like 0.7, 0.85, 0.45, etc., but it might also happen to have 2.13, which is different from the majority of the values. How can I spot these "outliers"?
Thank you
Call scipy.stats.zscore(a) with a DataFrame a to get a NumPy array containing the z-score of each value in a. Call numpy.abs(x) on the result to convert each element to its absolute value. Use (array < 3).all(axis=1) on that result to create a boolean mask, then filter the original DataFrame with it.
import numpy as np
from scipy import stats

z_scores = stats.zscore(df)
abs_z_scores = np.abs(z_scores)
filtered_entries = (abs_z_scores < 3).all(axis=1)
new_df = df[filtered_entries]
You could get the standard deviation and mean of the set and remove anything more than X (say 2) standard deviations from the mean. The following C# extension method would calculate the sample standard deviation:
public static double StdDev(this IEnumerable<double> values)
{
double ret = 0;
if (values.Count() > 1)
{
double avg = values.Average();
double sum = values.Sum(d => Math.Pow(d - avg, 2));
ret = Math.Sqrt((sum) / (values.Count() - 1));
}
return ret;
}
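Back in pandas terms (the question is about a DataFrame column), here is a minimal sketch of the same mean plus/minus 2 sigma idea; the column name col and the sample values are made up:
import pandas as pd

df = pd.DataFrame({"col": [0.7, 0.85, 0.45, 0.6, 0.75, 0.8, 0.55, 2.13]})
mu = df["col"].mean()
sigma = df["col"].std()                           # sample std, like the C# above
new_df = df[(df["col"] - mu).abs() <= 2 * sigma]  # keep values within 2 sigma
print(new_df)                                     # the 2.13 row is dropped here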
Suppose x, y, z are int variables and A is a matrix. I want to express a constraint like:
z == A[x][y]
However this leads to an error:
TypeError: object cannot be interpreted as an index
What would be the correct way to do this?
=======================
A specific example:
I want to select 2 items with the best combination score,
where the score is given by the value of each item and a bonus on the selection pair.
For example,
for 3 items a, b, c with respective values [1,2,1], and bonuses on the pairs (a,b) = 2, (a,c) = 5, (b,c) = 3, the best selection is (a,c), because it has the highest score: 1 + 1 + 5 = 7.
My question is how to represent the selection-bonus constraint.
Suppose CHOICE[0] and CHOICE[1] are the selection variables and B is the bonus variable.
The ideal constraint should be:
B = bonus[CHOICE[0]][CHOICE[1]]
but it results in TypeError: object cannot be interpreted as an index
I know another way is to use nested for loops to first instantiate CHOICE and then represent B, but this is really inefficient for large quantities of data.
Could any expert suggest a better solution, please?
If someone wants to play a toy example, here's the code:
from z3 import *

items = [0,1,2]
value = [1,2,1]
bonus = [[1,2,5],
         [2,1,3],
         [5,3,1]]
choices = [0,1]

# selection score
SCORE = [ Int('SCORE_%s' % i) for i in choices ]
# bonus
B = Int('B')
# final score
metric = Int('metric')
# selection variable
CHOICE = [ Int('CHOICE_%s' % i) for i in choices ]
# variable domain
domain_choice = [ And(0 <= CHOICE[i], CHOICE[i] < len(items)) for i in choices ]
# selection implication
constraint_sel = []
for c in choices:
    for i in items:
        constraint_sel += [Implies(CHOICE[c] == i, SCORE[c] == value[i])]
# choice not the same
constraint_neq = [CHOICE[0] != CHOICE[1]]
# bonus constraint. uncomment it to see the issue
# constraint_b = [B == bonus[val(CHOICE[0])][val(CHOICE[1])]]
# metric definition
constraint_sumscore = [metric == sum([SCORE[i] for i in choices]) + B]
constraints = constraint_sumscore + constraint_sel + domain_choice + constraint_neq + constraint_b

opt = Optimize()
opt.add(constraints)
opt.maximize(metric)
if opt.check() == sat:
    m = opt.model()
    print [ m.evaluate(CHOICE[i]) for i in choices ]
    print m.evaluate(metric)
else:
    print "failed to solve"
Turns out the best way to deal with this problem is to actually not use arrays at all, but simply create integer variables. With this method, the 317x317 item problem originally posted actually gets solved in about 40 seconds on my relatively old computer:
[ 0.01s] Data loaded
[ 2.06s] Variables defined
[37.90s] Constraints added
[38.95s] Solved:
c0 = 19
c1 = 99
maxVal = 27
Note that the actual "solution" is found in about a second! But adding all the required constraints takes the bulk of the 40 seconds spent. Here's the encoding:
from z3 import *
import sys
import json
import time

start = time.time()

def tprint(s):
    global start
    now = time.time()
    etime = now - start
    print "[%ss] %s" % ('{0:5.2f}'.format(etime), s)

# load data
with open('data.json') as data_file:
    dic = json.load(data_file)
tprint("Data loaded")

items = dic['items']
valueVals = dic['value']
bonusVals = dic['bonusVals']
vals = [[Int("val_%d_%d" % (i, j)) for j in items if j > i] for i in items]
tprint("Variables defined")

opt = Optimize()
for i in items:
    for j in items:
        if j > i:
            opt.add(vals[i][j-i-1] == valueVals[i] + valueVals[j] + bonusVals[i][j])

c0, c1 = Ints('c0 c1')
maxVal = Int('maxVal')
opt.add(Or([Or([And(c0 == i, c1 == j, maxVal == vals[i][j-i-1]) for j in items if j > i]) for i in items]))
tprint("Constraints added")

opt.maximize(maxVal)
r = opt.check()
if r == unsat or r == unknown:
    raise Z3Exception("Failed")
tprint("Solved:")

m = opt.model()
print " c0 = %s" % m[c0]
print " c1 = %s" % m[c1]
print " maxVal = %s" % m[maxVal]
I think this is as fast as it'll get with Z3 for this problem. Of course, if you want to maximize multiple metrics, then you can probably structure the code so that you can reuse most of the constraints, thus amortizing the cost of constructing the model just once, and incrementally optimizing afterwards for optimal performance.
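If you go that route, here is a minimal sketch of the push/pop pattern in z3py (toy constraints, not the problem above):
from z3 import *

opt = Optimize()
x, y = Ints('x y')
opt.add(x >= 0, y >= 0, x + y <= 10)  # shared constraints, built once

for bound in [5, 7]:                  # several queries against the same model
    opt.push()                        # save the shared state
    opt.add(y <= bound)               # query-specific constraint
    opt.maximize(x + y)
    if opt.check() == sat:
        print(opt.model())
    opt.pop()                         # discard the query-specific part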
I decided to modify the following while loop and use it inside a function so that the loop can take any value instead of 6.
i = 0
numbers = []
while i < 6:
    numbers.append(i)
    i += 1
I created the following script so that I can use the variable (or more specifically, the argument) instead of 6.
def numbers(limit):
    i = 0
    numbers = []
    while i < limit:
        numbers.append(i)
        i = i + 1
    print numbers

user_limit = raw_input("Give me a limit ")
numbers(user_limit)
When I didn't use raw_input() and simply passed the argument from the script, it was working fine, but now when I run it (in Microsoft PowerShell) a cursor blinks continuously after the raw_input() question is asked. Then I have to hit CTRL + C to abort it. Maybe the function is not getting called after raw_input().
Now it is giving a memory error like in the pic.
You need to convert user_limit to int:
raw_input() returns a str, while the comparison uses i, which is an int. In Python 2, any int compares as less than any str, so i < limit is always True, the loop never ends, and the list grows until you get the memory error.
def numbers(limit):
    i = 0
    numbers = []
    while i < limit:
        numbers.append(i)
        i = i + 1
    print numbers

user_limit = int(raw_input("Give me a limit "))
numbers(user_limit)
Output:
Give me a limit 8
[0, 1, 2, 3, 4, 5, 6, 7]
I need to find whether a number is divisible by 3 without using %, / or *. The hint given was to use the atoi() function. Any idea how to do it?
The current answers all focus on decimal digits when applying the "add all digits and see if that divides by 3" trick. That trick actually works in hex as well; e.g. 0x12 can be divided by 3 because 0x1 + 0x2 = 0x3. And "converting" to hex is a lot easier than converting to decimal.
Pseudo-code:
int reduce(int i) {
    if (i > 0x10)
        return reduce((i >> 4) + (i & 0x0F)); // Reduces 0x102 to 0x12 to 0x3.
    else
        return i; // Done.
}

bool isDiv3(int i) {
    i = reduce(i);
    return i==0 || i==3 || i==6 || i==9 || i==0xC || i==0xF;
}
[edit]
Inspired by R, a faster version (O(log log N)):
int reduce(unsigned i) {
    if (i >= 6)
        return reduce((i >> 2) + (i & 0x03));
    else
        return i; // Done.
}

bool isDiv3(unsigned i) {
    // Do a few big shifts first before recursing.
    i = (i >> 16) + (i & 0xFFFF);
    i = (i >> 8) + (i & 0xFF);
    i = (i >> 4) + (i & 0xF);
    // Because of additive overflow, it's possible that i > 0x10 here. No big deal.
    i = reduce(i);
    return i==0 || i==3;
}
Subtract 3 until you either
a) hit 0 - the number was divisible by 3
b) get a number less than 0 - the number wasn't divisible
-- edited version to fix noted problems
while n > 0:
    n -= 3
while n < 0:
    n += 3
return n == 0
Split the number into digits. Add the digits together. Repeat until you have only one digit left. If that digit is 3, 6, or 9, the number is divisible by 3. (And don't forget to handle 0 as a special case).
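For what it's worth, a minimal sketch of that procedure in Python (the helper name is made up):
def div3(n):
    n = abs(n)
    while n >= 10:                # repeat until one digit is left
        n = sum(int(d) for d in str(n))
    return n in (0, 3, 6, 9)      # 0 covers the special case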
While the technique of converting to a string and then adding the decimal digits together is elegant, it either requires division or is inefficient in the conversion-to-a-string step. Is there a way to apply the idea directly to a binary number, without first converting to a string of decimal digits?
It turns out, there is:
Given a binary number, the sum of its odd bits minus the sum of its even bits is divisible by 3 iff the original number was divisible by 3.
As an example: take the number 3726, which is divisible by 3. In binary, this is 111010001110. So we take the odd digits, starting from the right and moving left, which are [1, 1, 0, 1, 1, 1]; the sum of these is 5. The even bits are [0, 1, 0, 0, 0, 1]; the sum of these is 2. 5 - 2 = 3, from which we can conclude that the original number is divisible by 3.
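Here is a sketch of that rule in Python (a hypothetical helper, just to make it concrete; the difference is itself tested with the same rule, so no division sneaks in):
def div3(n):
    if n < 3:
        return n == 0                     # base cases: 0 yes; 1 and 2 no
    odd_sum = even_sum = 0
    pos = 0                               # bit position, 0 at the rightmost bit
    while n:
        if pos & 1:
            odd_sum += n & 1              # bit at an odd position (1, 3, 5, ...)
        else:
            even_sum += n & 1             # bit at an even position (0, 2, 4, ...)
        n >>= 1
        pos += 1
    return div3(abs(odd_sum - even_sum))  # the difference is much smaller than n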
A number divisible by 3, IIRC, has the characteristic that the sum of its digits is divisible by 3. For example,
12 -> 1 + 2 = 3
144 -> 1 + 4 + 4 = 9
The interview question essentially asks you to come up with (or already know) the divisibility shorthand with 3 as the divisor.
One of the divisibility rules for 3 is as follows:
Take any number and add together each digit in the number. Then take that sum and determine if it is divisible by 3 (repeating the same procedure as necessary). If the final number is divisible by 3, then the original number is divisible by 3.
Example:
16,499,205,854,376
=> 1+6+4+9+9+2+0+5+8+5+4+3+7+6 sums to 69
=> 6 + 9 = 15 => 1 + 5 = 6, which is clearly divisible by 3.
See also
Wikipedia/Divisibility rule - has many rules for many divisors
Given a number x:
Convert x to a string. Parse the string character by character, convert each parsed character to a number (using atoi()), and add up all these numbers into a new number y.
Repeat the process until your final resulting number is one digit long. If that one digit is 3, 6 or 9, the original number x is divisible by 3.
My solution in Java only works for 32-bit unsigned ints.
static boolean isDivisibleBy3(int n) {
    int x = n;
    x = (x >>> 16) + (x & 0xffff); // max 0x0001fffe
    x = (x >>> 8) + (x & 0x00ff);  // max 0x02fd
    x = (x >>> 4) + (x & 0x000f);  // max 0x003d (for 0x02ef)
    x = (x >>> 4) + (x & 0x000f);  // max 0x0011 (for 0x002f)
    return ((011111111111 >> x) & 1) != 0;
}
It first reduces the number down to a number less than 32. The last step checks for divisibility by shifting the mask the appropriate number of times to the right; note that 011111111111 is an octal literal, so the mask has a 1 at every bit position that is a multiple of 3.
You didn't tag this C, but since you mentioned atoi, I'm going to give a C solution:
#include <stdlib.h> /* for div() */

int isdiv3(int x)
{
    div_t d = div(x, 3);
    return !d.rem;
}
bool isDiv3(unsigned int n)
{
    unsigned int n_div_3 = n * (unsigned int) 0xaaaaaaab;
    return (n_div_3 < 0x55555556); // <=> n_div_3 <= 0x55555555
    /*
       because 3 * 0xaaaaaaab == 0x2 0000 0001 and
       (uint32_t) 0x200000001 == 1
    */
}

bool isDiv5(unsigned int n)
{
    unsigned int n_div_5 = n * (unsigned int) 0xcccccccd;
    return (n_div_5 < 0x33333334); // <=> n_div_5 <= 0x33333333
    /*
       because 5 * 0xcccccccd == 0x4 0000 0001 and
       (uint32_t) 0x400000001 == 1
    */
}
Following the same rule, to obtain the result of a divisibility test by n, we can:
multiply the number by 0x1 0000 0000 - (1/n) * 0xFFFFFFFF
compare to (1/n) * 0xFFFFFFFF
The catch is that for some values, the test won't return a correct result for all the 32-bit numbers you want to test. For example, with divisibility by 7:
we get 0x1 0000 0000 - (1/7) * 0xFFFFFFFF = 0xDB6DB6DC
and 7 * 0xDB6DB6DC = 0x6 0000 0004,
so we will only test one quarter of the values correctly, but we can certainly avoid that with subtractions.
Other examples:
11 * 0xE8BA2E8C = 0xA 0000 0004, one quarter of the values
17 * 0xF0F0F0F1 = 0x10 0000 0001, comparing to 0xF0F0F0F: every value!
Etc.; we can even cover every number by combining such tests.
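For what it's worth, whenever the divisor is odd it has an exact multiplicative inverse modulo 2^32, and the test then covers every 32-bit value. A small Python sketch of deriving the constants (pow(d, -1, m) needs Python 3.8+; the function names are made up):
def magic(d, bits=32):
    return pow(d, -1, 1 << bits)               # modular inverse of d mod 2^bits

def divides(d, n, bits=32):
    mask = (1 << bits) - 1
    return (n * magic(d)) & mask <= mask // d  # same comparison as isDiv3 above

print(hex(magic(3)))                           # 0xaaaaaaab, as above
print(divides(3, 3726), divides(3, 3725))      # True False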
A number is divisible by 3 if repeatedly adding its digits eventually gives 3, 6 or 9. For example, 3693 is divisible by 3, as 3+6+9+3 = 21 and 2+1 = 3, and 3 is divisible by 3.
#include <stdint.h>

inline bool divisible3(uint32_t x) // inline is not a must; recent compilers will inline it anyway
{
    // 1431655765 = (2^32 - 1) / 3
    // 2863311531 = (2^32) - 1431655765
    return x * 2863311531u <= 1431655765u;
}
On some compilers this is even faster than the regular way: x % 3. Read more here.
Well, a number is divisible by 3 if the sum of its digits is divisible by 3. So you could get each digit as a substring of the input number and then add them up. You then repeat this process until there is only a single-digit result.
If this is 3, 6 or 9, the number is divisible by 3.
Here is some pseudo-Algol I came up with.
Let us follow the binary progression of multiples of 3:
000 011
000 110
001 001
001 100
001 111
010 010
010 101
011 000
011 011
011 110
100 001
100 100
100 111
101 010
101 101
Just one remark: for a binary multiple of 3, x = abcdef, the low bits def fall in the same group as the high bits abc among the groups (000, 011, 110), (001, 100, 111), (010, 101); hence my proposed algorithm:
divisible(x):
    y = x & 7
    z = x >> 3
    if number_of_bits(z) < 4
        if z = 000 or 011 or 110, return (y == 000 or 011 or 110) end
        if z = 001 or 100 or 111, return (y == 001 or 100 or 111) end
        if z = 010 or 101, return (y == 010 or 101) end
    end
    if divisible(z), return (y == 000 or 011 or 110) end
    if divisible(z-1), return (y == 001 or 100 or 111) end
    if divisible(z-2), return (y == 010 or 101) end
end
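A direct Python rendering of this pseudocode (with the bit triples written as plain integers: 000, 011, 110 are 0, 3, 6, and so on), just to show it runs:
def divisible(x):
    y, z = x & 7, x >> 3
    if z == 0:
        return y in (0, 3, 6)  # small case: compare the low bits directly
    if divisible(z):
        return y in (0, 3, 6)  # z == 0 (mod 3)
    if divisible(z - 1):
        return y in (1, 4, 7)  # z == 1 (mod 3)
    return y in (2, 5)         # z == 2 (mod 3)

print([n for n in range(20) if divisible(n)])  # [0, 3, 6, 9, 12, 15, 18]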
C# Solution for checking if a number is divisible by 3
using System;

namespace ConsoleApplication1
{
    class Program
    {
        static void Main(string[] args)
        {
            int num = 33;
            bool flag = false;
            while (true)
            {
                num = num - 3; // keep subtracting 3
                if (num == 0)
                {
                    flag = true;
                    break;
                }
                else if (num < 0)
                {
                    break;
                }
                else
                {
                    flag = false;
                }
            }
            if (flag)
                Console.WriteLine("Divisible by 3");
            else
                Console.WriteLine("Not Divisible by 3");
            Console.ReadLine();
        }
    }
}
Here is an optimized solution that everyone should know.
Source: http://www.geeksforgeeks.org/archives/511
#include <stdio.h>
#include <stdlib.h> /* for abs() */

/* Returns 1 if n is a multiple of 3. The difference between the counts
   of set bits at odd and even positions is divisible by 3 iff n is,
   so we recurse on that (much smaller) difference. */
int isMultiple(int n)
{
    int o_count = 0; /* set bits at odd positions */
    int e_count = 0; /* set bits at even positions */
    if (n < 0)
        n = -n;
    if (n == 0)
        return 1;
    if (n == 1)
        return 0;
    while (n)
    {
        if (n & 1)
            o_count++;
        n = n >> 1;
        if (n & 1)
            e_count++;
        n = n >> 1;
    }
    return isMultiple(abs(o_count - e_count));
}

int main()
{
    int num = 23;
    if (isMultiple(num))
        printf("multiple of 3");
    else
        printf("not multiple of 3");
    return 0;
}