Situation
I'm preparing a migration of user data and have a list of user subscriptions, for which I want to give every user a member id.
We only want to migrate active subscriptions, which are identified by the value 1 in the column "active". The oldest user should get the lowest number, counting upwards from there.
Problem
The oldest users have already renewed their subscriptions. Please have a look at this image:
If I order the dataset by date and assign a sequential number from there, the oldest user doesn't get the lowest number, since we only migrate active subscriptions. In the above image, the oldest user "one" should have the id 1 but gets the id 5 with my current approach.
Possible Solution
I'm struggling to find a solution to this problem. I was thinking of a way to:
a) sort by date and assign an ongoing member id
b) for each user mail address with active = 1, check whether an entry with active = 0 already exists, and if so, overwrite the member id.
Afterwards, it should look like this:
Create sample dataset:
import pandas as pd

df = pd.DataFrame({
    "member_id": [1, 2, 3, 4, 5, 6],
    "active": [0, 0, 1, 1, 1, 1],
    "date": ["Jan 2020", "Feb 2020", "Mar 2020", "Apr 2020", "Jan 2021", "Feb 2021"],
    "mail": ["one#user.com", "two#user.com", "three#user.com",
             "four#user.com", "one#user.com", "two#user.com"]})
Then find unique users and their ids:
# Change date column to datetime
df["date"] = pd.to_datetime(df["date"])
# Sort rows by active and date columns
df = df.sort_values(by=["active","date"])
# Find unique users by order of appearance
users = df["mail"].unique()
# Assign each user an id by order of appearance (oldest first)
users2id = {u: i + 1 for i, u in enumerate(users)}
# Update member_id via the mapping
df["member_id"] = df["mail"].map(users2id)
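To complete the picture, the migration set itself would then be just the active rows. A minimal sketch, rebuilding the frame inline (with the corrected ids from the answer above) so it runs standalone:

```python
import pandas as pd

# Example frame with the corrected ids; "active" marks rows to migrate
df = pd.DataFrame({
    "member_id": [1, 2, 3, 4, 1, 2],
    "active":    [0, 0, 1, 1, 1, 1],
    "mail": ["one#user.com", "two#user.com", "three#user.com",
             "four#user.com", "one#user.com", "two#user.com"]})

# Keep only the active rows; users "one" and "two" carry the low ids
# derived from their oldest (inactive) subscriptions
migrated = df[df["active"] == 1].sort_values("member_id")
print(migrated["member_id"].tolist())  # → [1, 2, 3, 4]
```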
Trying to get data like this didn't work for me:
tod_session_req = await session.execute(
select(
Users.firstname
).join(
Users, Users.id == PrivateTods.fu_id
).outerjoin(
Blocks, Users.id == Blocks.blocker
).where(
Blocks.blocked != user.id, # other user didnt block current
)
)
tod_session = tod_session_req.fetchone()
I could use raw queries or any other method. Any help is appreciated, thanks a lot.
In my bot:
If a user wants to play, the bot must check the Tods table for a row where the Tods.su_id value is None.
Suppose the user who wants to be connected to Tods.fu_id has the id 100.
If there is a row with Tods.su_id set to None, the bot should check two conditions in the Blocks table:
1. In the rows where Blocks.blocker equals Tods.fu_id, is Blocks.blocked equal to 100? (Did the user Tods.fu_id block the user with id 100?)
If user 100 wasn't blocked, go to the next condition.
If they were blocked, skip this row.
2. In the rows where Blocks.blocker equals 100, is Blocks.blocked equal to Tods.fu_id? (Did user 100 block the user Tods.fu_id?)
If not, select that row and give it to me.
If they did block them, keep searching; if no user is left to play with, a new row is added to the Tods table, but that part is handled in Python, not in the database.
I have the following question!
I have a table like this:
Data Source
I want to create a field (I suppose it's a field) from which I can take the apl_ids
that have only the service_offered values I want.
Example from the above table: if I want the apl_ids that have ONLY the service_offered values
Pending 1, Pending 2 and Pending 7,
then I want to get apl_id = "13", since apl_id = "12" has one more service that I don't need.
What is the best way to get that?
Thank you in advance!
Add a calculated field which gives 1 for the desired values and 0 for all others. Add another calculated field with a FIXED LOD on apl_id that sums the first one. Filter to ids with value = 3 only. I think that should work.
Otherwise let me know and I will post screenshots.
You can create a set based on the field apl_id, defined by the condition
max([service_offered]="Pending 1") and
max([service_offered]="Pending 2") and
max([service_offered]="Pending 7") and
min([service_offered]="Pending 1" or [service_offered]="Pending 2" or [service_offered]="Pending 7")
This set will contain those apl_ids that have at least one record where service_offered is "Pending 1", at least one record with "Pending 2" ... and where every record's service_offered is "Pending 1", "Pending 2" or "Pending 7" (i.e. no others).
The key is to realize that Tableau treats True as greater than False, so min() and max() for boolean expressions correspond to every() and any().
Once you have a set of apl_ids you can use it on shelves and in calculated fields in many different ways.
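For readers more comfortable outside Tableau, the same "exactly this set of services" condition can be sanity-checked in pandas; the data below is made up to mirror the example:

```python
import pandas as pd

wanted = {"Pending 1", "Pending 2", "Pending 7"}

# Toy data: apl_id 12 has one extra service, apl_id 13 has exactly the wanted set
df = pd.DataFrame({
    "apl_id": [12, 12, 12, 12, 13, 13, 13],
    "service_offered": ["Pending 1", "Pending 2", "Pending 7", "Pending 9",
                        "Pending 1", "Pending 2", "Pending 7"]})

def qualifies(services):
    s = set(services)
    # every wanted service present (the max() conditions) and
    # no service outside the wanted set (the min() condition)
    return wanted <= s and s <= wanted

hits = [apl for apl, g in df.groupby("apl_id")["service_offered"] if qualifies(g)]
print(hits)  # → [13]
```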
I am trying to automatically count the unique occurrences of a string saved in a table. Currently I have a count of a string, but only when a user selects the string, and it gives every record the same count value.
For example
Below is an image of my current table:
From the image you can see that there is a Requirement column and a Count column. I have got it to the point where, when the user selects a requirement record (each requirement record has a link), the requirement text is inserted into a page item called 'P33_REQUIREMENT' so the count has a value to compare against.
This is the SQL that I have at current:
SELECT (SELECT COUNT(*)
FROM DIA_ASSOCIATED_QMS_DOCUMENTS
WHERE REQUIREMENT = :P33_REQUIREMENT
group by REQUIREMENT
) AS COUNT,
DPD.DIA_SELECTED,
DPD.Q_NUMBER_SELECTED,
DPD.SECTION_SELECTED,
DPD.ASSIGNED_TO_PERSON,
DAQD.REFERENCE,
DAQD.REQUIREMENT,
DAQD.PROGRESS,
DAQD.ACTION_DUE_DATE,
DAQD.COMPLETION_DATE,
DAQD.DIA_REF,
DA.DIA,
DA.ORG_RISK_SCORE
FROM DIA_PROPOSED_DETAIL DPD,
DIA_ASSOCIATED_QMS_DOCUMENTS DAQD,
DIA_ASSESSMENTS DA
WHERE DPD.DIA_SELECTED = DAQD.DIA_REF
AND DPD.DIA_SELECTED = DA.DIA
This is the SQL used to build the table in the image.
The issue with this is that it gives every record the same count when the user selects a requirement value. I can partly fix this by also adding AND DIA_SELECTED = :P33_DIA to the WHERE clause of the count, DIA_SELECTED being the first column in the table and :P33_DIA the item that stores the DIA ref number of the chosen record.
The output of this looks like:
As you can see, there is now only one count. That still doesn't fix the entire issue, but it's a bit better.
So to sum up: is there a way to have the count tally the occurrences individually and insert them into the rows whose requirements are the same? So if there are three 'test' rows like in the images, there would be a '3' in the count column where requirement = 'test', and if there is one record with 'test the system', there would be a '1' in its count column.
Also, for more context, I won't know in advance what the user will input as the requirement, so I can't compare against pre-determined strings.
I'm new to Stack Overflow; I hope I have explained enough and it's not too confusing.
The following extract:
SELECT (SELECT COUNT(*)
FROM DIA_ASSOCIATED_QMS_DOCUMENTS
WHERE REQUIREMENT = :P33_REQUIREMENT group by REQUIREMENT ) AS COUNT
Could be replaced by
SELECT (SELECT COUNT(*)
FROM DIA_ASSOCIATED_QMS_DOCUMENTS
WHERE REQUIREMENT = DAQD.REQUIREMENT ) AS COUNT
This would give you, for each line, the number of rows whose requirement is identical.
I'm not completely certain this is what you are after, but if it isn't, it should give you some ideas on how to progress (or let you point out where I failed to understand your request).
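Another option is the analytic form COUNT(*) OVER (PARTITION BY ...), which Oracle supports and which avoids the scalar subquery entirely. A minimal sketch against an in-memory SQLite database (whose window-function syntax matches here), with made-up data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE dia_associated_qms_documents (requirement TEXT);
    INSERT INTO dia_associated_qms_documents VALUES
        ('test'), ('test'), ('test'), ('test the system');
""")

# One output row per input row, each carrying the size of its
# requirement group, with no :P33_REQUIREMENT bind needed
rows = conn.execute("""
    SELECT requirement,
           COUNT(*) OVER (PARTITION BY requirement) AS cnt
    FROM dia_associated_qms_documents
""").fetchall()
print(rows)
```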
This is a relatively simple question regarding Data Loader. I'm currently running a query against our app that pulls the 'last login' by a user for each account. As our app is not integrated with our SFDC, I have to query the data, then manually upload the CSV file using Data Loader.
This particular field, 'Last Login', is on the account page. Long story short, the output of my query has some rows with the same account ID but different dates: one more recent, one less recent. E.g. two rows with the same account ID, where one 'Last Login' date is 7/30/18 and the other row (same account id) has a 'Last Login' date of 7/17/18.
Instead of manually deleting the row with the less recent date, is there a way I can order the column (descending or ascending) so that the 'Last Login' field is populated with the most recent date?
Essentially, if the record id is the same, in what order will the org ingest the data?
Thanks for your help!
-M
Data is inserted/updated in the order in which it appears in the source file.
If you have an update file like this:
Id,Name
00170000015Uemk,Some Name
00170000015Uemk,Some Different Name
The last row will "win". Note that this is the behavio(u)r of the API access. In Apex, doing something like this will crash and burn:
update new List<Account>{
new Account(Id = '00170000015Uemk', Name = '1'),
new Account(Id = '00170000015Uemk', Name = '2')
};
// System.ListException: Duplicate id in list: 00170000015UemkAAC
If you want to do it quick and dirty, see if SELECT ... FROM Account ORDER BY Id, LastLoginDate ASC helps. It should sort multiple rows for the same account together, then sort by date in ascending order, so the most recent row will "win".
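Another route is to clean the CSV before handing it to Data Loader, so each account id appears only once with its most recent date. A hedged pandas sketch; the column names are assumptions:

```python
import pandas as pd

# Toy extract: two rows for the same account with different login dates
df = pd.DataFrame({
    "Id": ["00170000015Uemk", "00170000015Uemk"],
    "Last_Login__c": ["7/17/18", "7/30/18"]})

df["Last_Login__c"] = pd.to_datetime(df["Last_Login__c"], format="%m/%d/%y")

# Keep only the newest row per account id
deduped = (df.sort_values("Last_Login__c")
             .drop_duplicates("Id", keep="last"))
print(deduped["Last_Login__c"].dt.strftime("%Y-%m-%d").tolist())  # → ['2018-07-30']
```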
But this sounds like you have a business rule never to overwrite a newer date with an older one. So maybe a validation rule to reject bad rows? Something like:
!ISBLANK(Date__c) && PRIORVALUE(Date__c) > Date__c
Does anybody know how to fetch calendar (calendar.event) meetings/events for a particular date, or from date to date, in Odoo?
So far I have read meetings using meeting ids as follows:
(
    Sample DB Name,
    1,
    Password,
    calendar.event,
    read,
    (125)
)
In the above input, the parameter 125 is my meeting id, so I get the record for that particular meeting. But now I want meeting records based on dates.
How can I achieve this? What should the inputs be?
You are accessing the Odoo External API.
The read method is to be used when you know the ids of the records to fetch.
To get records based on a condition you should use search_read, passing it a domain expression instead of the record ids.
As an illustration, the domain you need could look like:
[['start_datetime', '>=', '2015-04-29 00:00:00'],
['start_datetime', '<', '2015-04-30 00:00:00']]
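Putting it together, a hedged Python sketch of the search_read call over XML-RPC (URL, database and credentials are placeholders; the start_datetime field name follows the answer above and may differ between Odoo versions):

```python
import xmlrpc.client

def date_range_domain(start, stop):
    """Domain for events whose start_datetime falls in [start, stop),
    with dates given as YYYY-MM-DD strings."""
    return [["start_datetime", ">=", f"{start} 00:00:00"],
            ["start_datetime", "<", f"{stop} 00:00:00"]]

def fetch_meetings(url, db, uid, password, start, stop):
    # search_read combines search() and read() in one round trip
    models = xmlrpc.client.ServerProxy(f"{url}/xmlrpc/2/object")
    return models.execute_kw(
        db, uid, password, "calendar.event", "search_read",
        [date_range_domain(start, stop)],
        {"fields": ["name", "start_datetime"]})

# Example (placeholders, not run here):
# meetings = fetch_meetings("https://myserver", "mydb", 1, "Password",
#                           "2015-04-29", "2015-04-30")
```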