I have a query that pulls a field IncCode from Table A and turns its values into the column headers 1, 2, 3, 4, 5. That query also pulls an address list, as shown below:
Query 1:

Address      1  2  3  4  5
100 Street   A  B  C  D  E
200 Street   A  B  C  D  E
300 Street   A  B  C  D  E
This output is then compared against another query in a crosstab to pull in a different Address based on matching values:
(Crosstab) Query 2:

Address      1  2  3  4  5
400 Street   A  B  C  D  E
500 Street   A  B  C  D  E
600 Street   A  B  C  D  E
The problem is that IncCode only produces a column when there is something to output. If there are no entries for 5, for example, then there is no column 5, and the crosstab, which requires me to select the columns explicitly, fails with an error.

Is there a way to pre-empt this without manually removing the absent columns from the crosstab SQL? I realize a SELECT * would pull in the output regardless of which columns are present, but I also need the updated addresses from the crosstab comparison. The desired output would include an empty column to represent the missing field, as shown in Example 1. Any help would be appreciated.
Example 1 (column 5 not pulled from Query 1):

Address      1  2  3  4  5
400 Street   A  B  C  D
500 Street   A  B  C  D
600 Street   A  B  C  D
Related
I have a table like

a 1
a 2
b 1
b 3
b 2
b 4

and I wanted output like this:

1  2  3  4
a  a
b  b  b  b

The number of rows in the output may vary. Pivoting is not working because this is Exasol, and CASE can't work since the columns are dynamic.
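Exasol aside, the transformation itself can be sketched in pandas (the column names key/val are assumptions): pd.crosstab discovers the column set from the data, so the pivot stays dynamic when new values appear.

```python
import pandas as pd

# The two-column table from the question.
df = pd.DataFrame({"key": ["a", "a", "b", "b", "b", "b"],
                   "val": [1, 2, 1, 3, 2, 4]})

# crosstab builds one output column per distinct val automatically.
out = pd.crosstab(df["key"], df["val"])

# Replace the counts with the key letter where present, blank otherwise,
# to match the desired layout.
grid = out.copy().astype(object)
for key in grid.index:
    for val in grid.columns:
        grid.loc[key, val] = key if out.loc[key, val] > 0 else ""
print(grid)
```

In the database itself, the usual route is generating the pivot SQL dynamically from the distinct values, since a static CASE list cannot adapt.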
I am new to Python and pandas. I have a huge dataset, and instead of applying a function row by row I want to apply it to a batch of rows and map the results back to the corresponding rows.
Example:

ID  Values
a   2
b   3
c   4
d   5
e   6
f   7
df['squared_values'] = df['Values'].apply(lambda row: function(row))

def function(x):
    # makes a call to an API and returns a value related to x
    return response

The above applies the function row by row, which is time-consuming. I need a way to do batch operations on rows.
Example:

batch = 3
df['squared_values'] = df['Values'].apply(lambda batch: function(batch))
On the first pass the values should be

ID  Values  squared_values
a   2       4
b   3       9
c   4       16
d   5
e   6
f   7
and on the second pass

ID  Values  squared_values
a   2       4
b   3       9
c   4       16
d   5       25
e   6       36
f   7       49
Is this operation really too slow?

df['squared_values'] = df['Values'] ** 2

You can always use positional slicing to update only a batch of rows:

df['squared_values'].update(df.iloc[0:4]['Values'] ** 2)

But I can't imagine this being quicker.
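If the per-row cost really comes from an external call rather than the arithmetic, batching can be sketched like this (function here is a hypothetical stand-in for the API call, squaring is just a placeholder):

```python
import pandas as pd

df = pd.DataFrame({"ID": list("abcdef"), "Values": [2, 3, 4, 5, 6, 7]})

def function(batch):
    # Hypothetical stand-in for the real API call: takes a whole batch
    # of values and returns one result per value (here, their squares).
    return batch ** 2

batch_size = 3
pieces = []
# Walk the column in slices of batch_size, calling the function once per
# slice instead of once per row.
for start in range(0, len(df), batch_size):
    pieces.append(function(df["Values"].iloc[start:start + batch_size]))

# Each slice keeps its original index, so concat lines every result up
# with its corresponding row.
df["squared_values"] = pd.concat(pieces)
print(df)
```

After the first iteration only rows a, b, c are filled, matching the "first pass" table; the loop then completes the rest.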
I am trying to get the rows from the table that correspond to selected indexes. For example, I have one xls file with different columns of data. Currently my code searches two selected columns and finds their indexes; now I want to pull the elements of another column for those same rows.
Let A B C D E F G be the column names, with thousands of rows of numbers, like

A  B  C  D  E  F  G
1  3  4  5  6  3  3
3  4  5  6  3  2  7
...
4  7  3  2  5  3  2

Currently my code searches two specific columns (say B and F, for values in some range); now I want to pull the column A values for the rows matching those selected ranges, like this:

B  F  A
3  4  5
3  5  3
7  7  3
5  4  6
...
This is my current code VI
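If the xls can be loaded into pandas, the row selection can be sketched like this (the sample values and the ranges for B and F are assumptions, not the real data):

```python
import pandas as pd

# Hypothetical rows standing in for the xls data (only the columns used).
df = pd.DataFrame({
    "A": [1, 3, 4, 6],
    "B": [3, 4, 7, 5],
    "F": [3, 9, 3, 3],
})

# Keep only the rows where B and F fall inside the selected ranges, then
# read off column A for those same rows.
mask = df["B"].between(3, 7) & df["F"].between(2, 4)
selected = df.loc[mask, ["B", "F", "A"]]
print(selected)
```

The boolean mask carries the row indexes, so any other column can be read for the same selection without searching again.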
I have the following two tables:

A.

A_ID  Amount  GL_ID
-------------------
1     100     10
2     200     11
3     150     10
4     20      10
5     369     12
6     369     11
7     254     12

B.

B_ID  Name  GL_ID
-----------------
1     A     10
2     B     10
3     C     11
4     D     11
5     E     12
6     F     12
I want to join these tables. They have the GL_ID column in common (the ID of another table). Table A stores transactions along with GL_ID, while table B defines the document type (A, B, C, D, etc.) with reference to GL_ID. A and B have no common column except GL_ID. I want the following result: the relevant document type for each transaction in table A.
A.A_ID  A.Amount  B.Name
------------------------
1       100       A
2       200       B
3       150       B
4       20        B
5       369       A
6       369       D
7       254       D
But when I apply a join (LEFT, RIGHT, or FULL JOIN), the query shows repeated values. I only want the relevant doc type for each line in table A.
Try this:

select distinct A.A_ID, A.Amount, B.Name
from A
inner join B on A.GL_ID = B.GL_ID
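Note that with this data DISTINCT alone still returns two rows per transaction, because each GL_ID in B maps to two different names. One way to guarantee a single name per transaction is to reduce B to one row per GL_ID before joining; a runnable sketch using SQLite (MIN(Name) is an arbitrary tie-breaker, chosen only for illustration):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE A (A_ID INT, Amount INT, GL_ID INT);
CREATE TABLE B (B_ID INT, Name TEXT, GL_ID INT);
INSERT INTO A VALUES (1,100,10),(2,200,11),(3,150,10),(4,20,10),
                     (5,369,12),(6,369,11),(7,254,12);
INSERT INTO B VALUES (1,'A',10),(2,'B',10),(3,'C',11),(4,'D',11),
                     (5,'E',12),(6,'F',12);
""")

# Collapse B to one name per GL_ID first so the join stays one-to-one;
# which of the two names to keep is a business decision, not a SQL one.
rows = con.execute("""
    SELECT A.A_ID, A.Amount, b.Name
    FROM A
    JOIN (SELECT GL_ID, MIN(Name) AS Name FROM B GROUP BY GL_ID) b
      ON A.GL_ID = b.GL_ID
    ORDER BY A.A_ID
""").fetchall()
print(rows)
```

Which name "wins" per GL_ID has to come from some rule in the data; without one, the duplicates are a faithful picture of the relationship.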
I have a dataset FINAL with variables A B C D E, of which A, B, C are numeric and D, E are character. I want to delete the data from dataset FINAL and repopulate it with new data from dataset ONE and dataset TWO. Dataset ONE has variables A B C, and dataset TWO has D and E.
Example:

FINAL
A B C D E
1 2 3 a b
4 5 6 c d

I want to delete the old content. Then it should look like

FINAL
A B C D E
I have datasets ONE and TWO as

ONE
A B C
0 2 4
1 2 3
7 6 4

TWO
D E
x y
p q

I want to update FINAL with the content of ONE and TWO, like

FINAL
A B C D E
0 2 4 x y
1 2 3 p q
7 6 4
I think you want merge (documented here):

data final;
    merge one two;
run;
This is a bit more painful using proc sql.
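For comparison, the same positional, side-by-side combination can be sketched in pandas (assuming the two input datasets as DataFrames; this mirrors a MERGE without a BY statement, which pairs rows by position):

```python
import pandas as pd

one = pd.DataFrame({"A": [0, 1, 7], "B": [2, 2, 6], "C": [4, 3, 4]})
two = pd.DataFrame({"D": ["x", "p"], "E": ["y", "q"]})

# Concatenating along axis=1 pairs rows by position; the third row of
# ONE has no partner in TWO, so its D and E come out missing, just as
# they are blank in the expected FINAL.
final = pd.concat([one, two], axis=1)
print(final)
```

Rebuilding FINAL from scratch this way also handles the "delete the old content" step for free, since the old table is simply replaced.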