SFDC Data Loader on Account - Same Id, Different Field Values - sql

This is a relatively simple question regarding Data Loader. I'm currently running a query against our app that pulls the 'last login' by a user for each account. As our app is not integrated with our SFDC org, I have to query the data and then manually upload the CSV file using Data Loader.
This particular field, 'Last Login', is on the account page. Long story short, the output of my query has some rows with the same account ID but different dates - one more recent and one less recent. For example, two rows with the same account ID: one 'Last Login' date is 7/30/18, and the other row has a 'Last Login' date of 7/17/18.
Instead of manually deleting the row with the less recent date, is there a way I can order the column (either descending or ascending) so that the 'Last Login' field will populate with the most recent date?
Essentially, if the record ID is the same, in what order will the org ingest the data?
Thanks for your help!
-M

Data is inserted/updated in the order in which it appears in the source file.
If you have an update file like this:
Id,Name
00170000015Uemk,Some Name
00170000015Uemk,Some Different Name
The last row will "win". Note that this is the behavio(u)r of API access. In Apex, doing something like this will crash & burn:
update new List<Account>{
    new Account(Id = '00170000015Uemk', Name = '1'),
    new Account(Id = '00170000015Uemk', Name = '2')
};
// System.ListException: Duplicate id in list: 00170000015UemkAAC
If you want to do it quick & dirty, see if SELECT ... FROM Account ORDER BY Id, LastLoginDate ASC helps. It should sort multiple rows for the same account together, but then sort by date in ascending order, so the most recent should "win".
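Alternatively, you could deduplicate in the source query itself so that each account reaches Data Loader only once. A minimal sketch in generic SQL, assuming a hypothetical source table logins with account_id and login_date columns (the names in your app's database will differ):
-- keep only the most recent login per account (hypothetical table/column names)
SELECT account_id, MAX(login_date) AS last_login
FROM logins
GROUP BY account_id;
With one row per account in the CSV, the ordering question never comes up.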
But this sounds like you have a business rule to never overwrite a newer date with an older one. So maybe a validation rule to reject bad rows? Something like
!ISBLANK(Date__c) && PRIORVALUE(Date__c) > Date__c

Related

REDCap SQL query filtering on Instances

In a REDCap (EAV table) project each record is a testing site.
The project is divided into two instruments. Instrument 1 has information on the testing site (address, associated DAG).
Instrument 2 is a repeatable instrument. Each instance represents a date when testing is offered at that site.
I am trying to filter sites using a subquery depending on the date testing is offered, i.e. the site should show in the list when we are between today and the testing date. I managed to filter a whole record, but I do not know how to filter only an instance of the record.
SELECT value
FROM redcap_data
WHERE project_id = 80
  AND field_name = 'concat_site_date'
  AND record IN (
        SELECT record
        FROM redcap_data
        WHERE project_id = 80
          AND field_name = 'date'
          AND value >= date(now())
      )
This filters to records that have at least one instance where date >= date(now()), but it shows both testing dates. However, one of the two instances is in the past and I wish to hide it. What is the best way to also filter on instances in SQL queries?
You want to know which sites have testing dates that are in the future, right?
I'd pull out the instance values that meet the time criterion (i.e. are in the future) and join that to a subquery that gives you the site-level data you want (i.e. fields from the non-repeating form):
select instancedata.record, sitefield1, sitefield2, instance, testingdate
from (
    select record, coalesce(instance, 1) as instance, value as testingdate
    from redcap_data
    where project_id = 12188
      and field_name = 'testingdate'
      and now() < value
    group by project_id, record, event_id, instance
) instancedata
inner join (
    select record
         , group_concat(if(field_name = 'sitefield1', value, null)) as sitefield1
         , group_concat(if(field_name = 'sitefield2', value, null)) as sitefield2
    from redcap_data
    where project_id = 12188
    group by project_id, event_id, record
) recdata
    on instancedata.record = recdata.record
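Alternatively, you can keep the shape of the query in the question and correlate the subquery on both record and instance, so only future-dated instances survive. A sketch, assuming the project_id and field names from the question (REDCap stores instance 1 as NULL, hence the coalesce):
select d.value
from redcap_data d
where d.project_id = 80
  and d.field_name = 'concat_site_date'
  and exists (
        select 1
        from redcap_data f
        where f.project_id = d.project_id
          and f.record = d.record
          and coalesce(f.instance, 1) = coalesce(d.instance, 1)
          and f.field_name = 'date'
          and f.value >= date(now())
      );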

Set sequential number on entries with specific condition

Situation
I'm preparing the migration of user data and I have a list of user subscriptions, for which I try to give every user a member id.
We only want to migrate active subscriptions, which are identified by the value 1 in the column "active". The oldest user should get the lowest number, counting upwards from there.
Problem
The oldest users have already renewed their subscription. If I order the dataset by date and assign a sequential number from there, the oldest user doesn't get the lowest number, since we only migrate active subscriptions. In the sample data below, the oldest user "one" should have the id 1 but gets the id 5 under my current setting.
Possible Solution
I'm struggling to find a solution to this problem. I was thinking about finding a way to:
a) sort by date and assign an ongoing member id
b) for each user mail address with active = 1, check whether an entry with active = 0 already exists and, if yes, overwrite the member id.
Afterwards, the active rows should carry the member id of each user's oldest entry.
Create the sample dataset:
import pandas as pd

df = pd.DataFrame({
    "member_id": [1, 2, 3, 4, 5, 6],
    "active": [0, 0, 1, 1, 1, 1],
    "date": ["Jan 2020", "Feb 2020", "Mar 2020", "Apr 2020", "Jan 2021", "Feb 2021"],
    "mail": ["one#user.com", "two#user.com", "three#user.com", "four#user.com", "one#user.com", "two#user.com"]})
Then find unique users and their ids:
# Change date column to datetime
df["date"] = pd.to_datetime(df["date"])
# Sort rows by active and date columns
df = df.sort_values(by=["active","date"])
# Find unique users by order of appearance
users = df["mail"].unique()
# Find id of each user
users2id = {u:i+1 for i, u in enumerate(users)}
# Update id
df["member_id"] = df["mail"].apply(lambda u: users2id[u])

Displaying a single date header above multiple rows (RecyclerView)

Evening everyone
I've currently got a simple RecyclerView adapter which is being populated by an SQLite database. The user can add information into the database from the app, which then builds a row inside the RecyclerView. When you run the application it displays each row with its own date directly above it. I'm now looking to make the application look more professional by only displaying a single date above multiple records, as a header.
So far I've built 2 custom designs, one which displays the header along with the row and the other which is just a standard row without a header built in. I also understand how to implement two layouts into a single adapter.
I've also incorporated a single column into my database which simply stores the date in a format I can order by, e.g. 20190101.
Now my key question: when populating the adapter using the information from the SQLite database, how can I get it to check whether the previous record has the same date? If it has the same date then it doesn't need to show the custom row with the header, but if it's a new date then it does.
Thank you
/////////////////////////////////////////////////////////////////////////////
Follow-up question for Krokodilko: I've spent the last hour trying to work your implementation into my SQLite query but still haven't been able to find the right combination.
Below is the original SQLite line I currently use to simply fetch all the results.
Cursor cursor = sqLiteDatabase.rawQuery("SELECT * FROM " + Primary_Table + " " , null);
First you must define an order which will be used to determine which record is previous and which one is next. As I understand it, you are simply using the date column.
Then the query is simple - use the LAG analytic function to pick a column value from the previous row. Here is a link to a simple demo (click the "Run" button):
https://sqliteonline.com/#fiddle-5c323b7a7184cjmyjql6c9jh
DROP TABLE IF EXISTS d;
CREATE TABLE d(
d date
);
insert into d values ( '2012-01-22'),( '2012-01-22'),( '2015-01-22');
SELECT *,
lag( d ) OVER (order by d ) as prev_date,
CASE WHEN d = lag( d ) OVER (order by d )
THEN 'Previous row has the same date'
ELSE 'Previous row has different date'
END as Compare_status
FROM d
ORDER BY d;
In the above demo, the d column is used in the OVER (ORDER BY d) clause to determine the order of rows used by the LAG function.
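Applied to your own table, a sketch could look like the following, assuming the date column is called entry_date (substitute your real column name and the table behind Primary_Table) and that the device's SQLite is 3.25 or newer, which window functions require:
SELECT *,
       CASE
         WHEN entry_date = lag(entry_date) OVER (ORDER BY entry_date)
         THEN 0  -- same date as the previous row: use the plain row layout
         ELSE 1  -- first row of a new date: use the layout with the header
       END AS show_header
FROM Primary_Table
ORDER BY entry_date;
The adapter can then read show_header from the cursor and choose between the two layouts per row.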

How to count unique occurrences of a string in a table for separate records in APEX 5

I am trying to automatically count the unique occurrences of a string saved in the table. Currently I have a count of a string, but only when a user selects the string, and it gives every record the same count value.
For example, my current table has a Requirement column and a Count column. I have got it to the point where, when the user selects a requirement record (each requirement record has a link), it inserts the requirement text into an item called 'P33_REQUIREMENT' so the count has a value to compare to.
This is the SQL I currently have:
SELECT (SELECT COUNT(*)
FROM DIA_ASSOCIATED_QMS_DOCUMENTS
WHERE REQUIREMENT = :P33_REQUIREMENT
group by REQUIREMENT
) AS COUNT,
DPD.DIA_SELECTED,
DPD.Q_NUMBER_SELECTED,
DPD.SECTION_SELECTED,
DPD.ASSIGNED_TO_PERSON,
DAQD.REFERENCE,
DAQD.REQUIREMENT,
DAQD.PROGRESS,
DAQD.ACTION_DUE_DATE,
DAQD.COMPLETION_DATE,
DAQD.DIA_REF,
DA.DIA,
DA.ORG_RISK_SCORE
FROM DIA_PROPOSED_DETAIL DPD,
DIA_ASSOCIATED_QMS_DOCUMENTS DAQD,
DIA_ASSESSMENTS DA
WHERE DPD.DIA_SELECTED = DAQD.DIA_REF
AND DPD.DIA_SELECTED = DA.DIA
This is the SQL used to build the table described above.
The issue with this is that it gives every record the same count when the user selects a requirement value. I can partly fix this by also adding AND DIA_SELECTED = :P33_DIA to the WHERE clause of the count, DIA_SELECTED being the first column in the table and :P33_DIA being the item that stores the DIA reference number relating to the chosen record.
With that change there is only one count. It still doesn't fix the entire issue, but it's a bit better.
So, to sum up: is there a way to have the count tally the occurrences individually and put them into the rows whose requirements are the same? If there are three records with requirement 'test', there would be a '3' in the count column for each of them, and if there is one record with 'test the system' there would be a '1' in its count column.
Also, for more context, I won't know what the user will input into the requirement, so I can't compare against pre-determined strings.
I'm new to Stack Overflow; I'm hoping I have explained enough and it's not too confusing.
The following extract:
SELECT (SELECT COUNT(*)
FROM DIA_ASSOCIATED_QMS_DOCUMENTS
WHERE REQUIREMENT = :P33_REQUIREMENT group by REQUIREMENT ) AS COUNT
Could be replaced by
SELECT (SELECT COUNT(*)
FROM DIA_ASSOCIATED_QMS_DOCUMENTS
WHERE REQUIREMENT = DAQD.REQUIREMENT ) AS COUNT
Which would give you, for each row, the number of rows with an identical requirement.
I'm not completely certain it is what you are after, but if it isn't, it should give you some ideas on how to progress (or allow you to indicate where I failed to understand your request).
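If the joins do not multiply DAQD rows, an analytic count would give the same per-row figure without a correlated subquery; a sketch using the same tables and columns (alias renamed to avoid the COUNT keyword):
SELECT COUNT(*) OVER (PARTITION BY DAQD.REQUIREMENT) AS REQUIREMENT_COUNT,
       DPD.DIA_SELECTED,
       DPD.Q_NUMBER_SELECTED,
       DPD.SECTION_SELECTED,
       DPD.ASSIGNED_TO_PERSON,
       DAQD.REFERENCE,
       DAQD.REQUIREMENT,
       DAQD.PROGRESS,
       DAQD.ACTION_DUE_DATE,
       DAQD.COMPLETION_DATE,
       DAQD.DIA_REF,
       DA.DIA,
       DA.ORG_RISK_SCORE
FROM DIA_PROPOSED_DETAIL DPD,
     DIA_ASSOCIATED_QMS_DOCUMENTS DAQD,
     DIA_ASSESSMENTS DA
WHERE DPD.DIA_SELECTED = DAQD.DIA_REF
  AND DPD.DIA_SELECTED = DA.DIA
Note that here the count is taken over the joined result rather than over DIA_ASSOCIATED_QMS_DOCUMENTS alone, so verify it matches your data before relying on it.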

Return Files w/o Note in at least X Days

We have a customer database and I'm trying to get all of the customer files that haven't had a comment on them in three weeks or more.
I'm working with two tables; the file table with info about the file like which staff is assigned to it, and the comments table, where all the comments in the database are. They're linked by the file number field.
If I wanted the file number and date of last note, what SQL should I be using?
I have tried:
SELECT db_file_notes.file_num, Last(db_file_notes.file_date) AS
LastOfnote_date, Last(db_file_notes.note_key) AS LastOfnote_key
FROM db_file_notes
GROUP BY db_file_notes.file_num
ORDER BY db_file_notes.file_num, Last(db_file_notes.note_date);
There are a handful of files in the resulting query that shouldn't be. For example, file # 212720's last note was on 7/28, but the above query returns a last note date of 6/26 (the previous last note). Then there's file # 212781 with an actual last note on 7/21, but the query returns 6/12 (there are five newer notes since the one returned by the query).
There are no date criteria in the above SQL, but if I add the <=Date()-21 condition it's still incorrect (212720 is still there with a last note of 6/26). Interestingly, if I add a filter on the file number to only return a single file number like 212720, the last note date returns correctly.
I've tried sorting by file number then note date, and by file number and note key (on the general assumption that newer notes have higher key values), and get the same behavior. Instead of sorting ascending and taking the last record, I've tried sorting descending and taking the first; this returns the correct note for the files affected above, but then new cases show the problem in reverse.
Without bringing in the second table you could use either of these:
This will return the top grouped date; since Access's TOP includes ties on the ORDER BY value, if your maximum qualifying file_date is 12/06/2017 this will return all the records that have a file_date of 12/06/2017.
SELECT TOP 1 file_num
, file_date
, note_key
FROM db_file_notes
WHERE DATEDIFF("d",file_date,DATE())>=21
ORDER BY file_date DESC
This code, on the other hand, will return the maximum date for each group (a group being a combination of file_num and note_key):
SELECT file_num
, MAX(file_date) AS Max_File_date
, note_key
FROM db_file_notes
WHERE DATEDIFF("d",file_date,DATE())>=21
GROUP BY file_num, note_key
Note: You don't have to qualify each field name with the table name if the field is unique across all tables (or you're just using one table).
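If the end goal is one row per file showing its latest note date, restricted to files whose latest note is at least three weeks old, a sketch along these lines (Access SQL, grouping by file_num only and filtering on the aggregate) may be closer to the original question:
SELECT file_num
     , MAX(file_date) AS LastNoteDate
FROM db_file_notes
GROUP BY file_num
HAVING DATEDIFF("d", MAX(file_date), DATE()) >= 21
ORDER BY file_num;
Joining that result back to the file table on the file number field would then pull in the staff assignment and other file-level columns.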