Error in if (all(x > 0)) { : missing value where TRUE/FALSE needed in glMDPlot - error-handling

I am analyzing RNA sequencing data with 4 treatments. While making a Glimma MD plot in R, I got the error below with the following code. I would appreciate it if anyone could solve this problem.
All treatments include 4 replicates, except one that has only 3. I am not sure whether this causes the error, but I mention it just in case.
glMDPlot(fit.trend2, counts = logCPM.trend, status = trend_global_0.1,
         coef = 1, groups = d.filt$samples$group, samples = d.filt$samples$Label,
         sample.cols = d.filt$samples$col, folder = "glimma-plots",
         html = "HvsC_trend_globalFDR", main = "H. vs. C.", launch = TRUE)
Error in if (all(x > 0)) { : missing value where TRUE/FALSE needed
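For what it's worth, the error message itself comes from base R rather than Glimma: if cannot branch on NA, so any NA that reaches an internal check such as all(x > 0) fails exactly this way. A minimal sketch (plain R, nothing Glimma-specific) that suggests checking the inputs for NA values before calling glMDPlot:

x <- c(1.2, NA, 0.8)    # e.g. a column of counts or statuses containing an NA
all(x > 0)              # returns NA rather than TRUE or FALSE
if (all(x > 0)) { }     # reproduces: missing value where TRUE/FALSE needed
anyNA(logCPM.trend)     # worth checking each input to glMDPlot this way
anyNA(trend_global_0.1)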


SAP Business One - Add BP (DI Error: -5002)

I am trying to add new addresses to a BP. If there is already an address registered in the BP, everything works, but for new addresses it returns error -5002: Error updating BP: [OCRD.State2], 'the linked value' SP 'does not exist'.
I am testing with SAP Business One 10 (10.00.140) FP 2011.
if (oBP.GetByKey(CardCode))
{
    // If the last address line is already in use, append a new blank line
    oBP.Addresses.SetCurrentLine(oBP.Addresses.Count - 1);
    if (!string.IsNullOrEmpty(oBP.Addresses.AddressName))
    {
        oBP.Addresses.Add();
    }

    // Blank out state codes longer than 2 characters
    UF = json.data.endereco_uf;
    if (UF.Length > 2)
    {
        UF = "";
    }

    // First new address (ship-to)
    oBP.Addresses.SetCurrentLine(oBP.Addresses.Count - 1);
    oBP.Addresses.AddressName = "Novo 1";
    oBP.Addresses.AddressType = BoAddressType.bo_ShipTo;
    oBP.Addresses.Street = json.data.endereco_logradouro;
    oBP.Addresses.Block = json.data.endereco_bairro;
    oBP.Addresses.ZipCode = json.data.endereco_cep;
    oBP.Addresses.City = json.data.endereco_municipio;
    oBP.Addresses.State = UF;
    oBP.Addresses.County = county;
    oBP.Addresses.StreetNo = json.data.endereco_numero;
    oBP.Addresses.BuildingFloorRoom = json.data.endereco_complemento;
    oBP.Addresses.Add();

    // Second new address (bill-to)
    oBP.Addresses.SetCurrentLine(oBP.Addresses.Count - 1);
    oBP.Addresses.AddressName = "Novo 2";
    oBP.Addresses.AddressType = BoAddressType.bo_BillTo;
    oBP.Addresses.Street = json.data.endereco_logradouro;
    oBP.Addresses.Block = json.data.endereco_bairro;
    oBP.Addresses.ZipCode = json.data.endereco_cep;
    oBP.Addresses.City = json.data.endereco_municipio;
    oBP.Addresses.State = UF;
    oBP.Addresses.County = county;
    oBP.Addresses.StreetNo = json.data.endereco_numero;
    oBP.Addresses.BuildingFloorRoom = json.data.endereco_complemento;
    oBP.Addresses.Add();

    int iRetVal = oBP.Update();
    if (iRetVal != 0)
    {
        Program.oApplication.StatusBar.SetText("Error updating BP: " + Program.oCompany.GetLastErrorDescription(), BoMessageTime.bmt_Short, BoStatusBarMessageType.smt_Error);
        return false;
    }
SAP helped me to solve this issue; the answer follows.
I tried to reproduce this issue in the DemoUK (GB localization) database and was able to reproduce it as well.
After investigation, it was found that you cannot link a state which does not exist for the specific country.
In your case, the system throws the error because the data is validated based on the following query:
SELECT * FROM OCST WHERE "Code" = 'SP' AND "Country" = ''
Similarly, if I try to set the State to 'Arizona' and the Country to 'United Kingdom', it gives the same error, because the correct 'Country' value would be 'USA'.
Therefore, in order to resolve this issue, you need to choose one of the following:
Set the Country property in the DI code as well (the State should belong to the Country you are trying to set):
oBP.Addresses.Country = "GB";
Or remove the State assignment from the DI code, i.e. drop the line:
oBP.Addresses.State = "SP";
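In the scenario above, the first option might look like this (a sketch only; "BR" is an assumption for the Brazilian localization, where a state code such as 'SP' would be defined):

// Set the country before the state so that the OCST lookup has a country to match.
// "BR" is assumed here -- use whichever country code your states are defined under.
oBP.Addresses.Country = "BR";
oBP.Addresses.State = UF;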

How to split a field into 10 new fields using the SUBSTRING command in SQL

I have a field that I titled nothing. As of yet it has no value. It is the 261 characters at the end of my fixed-width file, largefile. Now I am being told to break this 261-character field into 10 separate fields, and I can reimport it using this new schema. I found something similar on another site and it makes sense, but it seems as if I am missing a few tidbits of code. Any thoughts on whether I am going about this the right way?
I have tried the following code, but it ends in an error.
update dbo.largefile
set blank1 = substring(nothing,1,9)
unkn1 = substring(nothing,10,1)
unkn2 = substring(nothing,11,1)
blank2 = substring(nothing,12,35)
unkn3 = substring(nothing,47,4)
unkn4 = substring(nothing,51,1)
contact = substring(nothing,52,35)
title = substring(nothing,87,35)
contactphone = substring(nothing,122,10)
website = substring(nothing,132,204)
unkn5 = substring(nothing,203,59);
Msg 102, Level 15, State 1, Line 3
Incorrect syntax near 'unkn1'.
You are missing commas after each assignment:
update dbo.largefile
set blank1 = substring(nothing,1,9),
unkn1 = substring(nothing,10,1),
unkn2 = substring(nothing,11,1),
blank2 = substring(nothing,12,35),
unkn3 = substring(nothing,47,4),
unkn4 = substring(nothing,51,1),
contact = substring(nothing,52,35),
title = substring(nothing,87,35),
contactphone = substring(nothing,122,10),
website = substring(nothing,132,204),
unkn5 = substring(nothing,203,59);
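Before running the UPDATE, it can also help to preview the slices (a quick sanity check against the same table and offsets as above, showing a few of the ten columns):

-- Preview the substrings before committing the UPDATE
SELECT TOP (10)
       substring(nothing,1,9)    AS blank1,
       substring(nothing,10,1)   AS unkn1,
       substring(nothing,52,35)  AS contact,
       substring(nothing,122,10) AS contactphone
FROM dbo.largefile;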

Sales Order Confirmation Report - SalesConfirmDP

I am modifying the SalesConfirmDP class and trying to add the CustVendExternalItem.ExternalItemTxt field into a new field I have created.
I have tried a couple of things, but I do not think my syntax was correct; i.e., I declared the CustVendExternalItem table in the class declaration, but when I try to insert CustVendExternalItem.ExternalItemTxt into my new field, it does not populate. I guess there must be a method I need to include?
If anyone has any suggestions, it would be highly appreciated.
Thank you in advance.
private void setSalesConfirmDetailsTmp(NoYes _confirmTransOrTaxTrans)
{
    DocuRefSearch docuRefSearch;

    // Body
    salesConfirmTmp.JournalRecId = custConfirmJour.RecId;
    if (_confirmTransOrTaxTrans == NoYes::Yes)
    {
        if (printLineHeader)
        {
            salesConfirmTmp.LineHeader = custConfirmTrans.LineHeader;
        }
        else
        {
            salesConfirmTmp.LineHeader = '';
        }

        salesConfirmTmp.ItemId = this.itemId();
        salesConfirmTmp.Name = custConfirmTrans.Name;
        salesConfirmTmp.Qty = custConfirmTrans.Qty;
        salesConfirmTmp.SalesUnitTxt = custConfirmTrans.salesUnitTxt();
        salesConfirmTmp.SalesPrice = custConfirmTrans.SalesPrice;
        salesConfirmTmp.DlvDate = custConfirmTrans.DlvDate;
        salesConfirmTmp.DiscPercent = custConfirmTrans.DiscPercent;
        salesConfirmTmp.DiscAmount = custConfirmTrans.DiscAmount;
        salesConfirmTmp.LineAmount = custConfirmTrans.LineAmount;
        salesConfirmTmp.CurrencyCode = custConfirmJour.CurrencyCode;
        salesConfirmTmp.PrintCode = custConfirmTrans.TaxWriteCode;
        if (pdsCWEnabled)
        {
            salesConfirmTmp.PdsCWUnitId = custConfirmTrans.pdsCWUnitId();
            salesConfirmTmp.PdsCWQty = custConfirmTrans.PdsCWQty;
        }

        // The line in question -- ExternalItemText does not populate:
        salesConfirmTmp.ExternalItemText = CustVendExternalItem.ExternalItemTxt;

        if ((custFormletterDocument.DocuOnConfirm == DocuOnFormular::Line)
            || (custFormletterDocument.DocuOnConfirm == DocuOnFormular::All))
        {
            docuRefSearch = DocuRefSearch::newTypeIdAndRestriction(custConfirmTrans,
                                                                   custFormletterDocument.DocuTypeConfirm,
                                                                   DocuRestriction::External);
            salesConfirmTmp.Notes = Docu::concatDocuRefNotes(docuRefSearch);
        }
        salesConfirmTmp.InventDimPrint = this.printDimHistory();
Well, AX cannot guess which record you need; there is a helper class, CustVendExternalItemDescription, to deal with it:
boolean found;
str externalItemId;
...
[found, externalItemId, salesConfirmTmp.ExternalItemText] =
    CustVendExternalItemDescription::findExternalItemDescription(
        ModuleCustVend::Cust,
        custConfirmTrans.ItemId,
        custConfirmTrans.inventDim(),
        custConfirmJour.OrderAccount,
        CustTable::find(custConfirmJour.OrderAccount).CustItemGroupId);
The findExternalItemDescription method returns more information than you need here, but you have to define variables to store it anyway.
Well, the steps to solve this problem are fairly easy, and I will try to give you a step-by-step approach.
1) Are you initialising CustVendExternalItem properly? Make a record of the same and initialise it as Jan has shown above, then debug your code and see whether the value is being initialised in your DP class.
2) If your value is being initialised correctly but is not showing up in the report design, there can be multiple issues, such as:
Overlapping text boxes.
Insufficient space for the given field.
Some report parameter/property not being set correctly, which causes your value not to show up on the report.
Check these one by one and you should arrive at a solution.

Salesforce Apex: Error ORA-01460

I've developed an Apex API on Salesforce which performs a SOQL query on a list of CSV data. It had been working smoothly until yesterday: after making a few changes to the code that follows the SOQL query, I started getting a strange 500 error:
[{"errorCode":"APEX_ERROR","message":"System.UnexpectedException:
common.exception.SfdcSqlException: ORA-01460: unimplemented or
unreasonable conversion requested\n\n\nselect /SampledPrequery/
sum(term0) \"cnt0\",\nsum(term1) \"cnt1\",\ncount(*)
\"totalcount\",\nsum(term0 * term1) \"combined\"\nfrom (select /*+
ordered use_nl(t_c1) /\n(case when (t_c1.deleted = '0') then 1 else 0
end) term0,\n(case when (upper(t_c1.val18) = ?) then 1 else 0 end)
term1\nfrom (select /+ index(sampleTab AKENTITY_SAMPLE)
*/\nentity_id\nfrom core.entity_sample sampleTab\nwhere organization_id = '00Dq0000000AMfz'\nand key_prefix = ?\nand rownum <=
?) sampleTab,\ncore.custom_entity_data t_c1\nwhere
t_c1.organization_id = '00Dq0000000AMfz'\nand t_c1.key_prefix = ?\nand
sampleTab.entity_id =
t_c1.custom_entity_data_id)\n\nClass.labFlows.queryContacts: line 13,
column 1\nClass.labFlows.fhaQuery: line 6, column
1\nClass.zAPI.doPost: line 10, column 1"}]
zAPI.doPost() is simply our router method: it takes in the POST payload as well as the requested operation, and then calls whatever function the operation requests.
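For context, the router might look something like this (a hypothetical reconstruction; the actual zAPI class was not posted, and only its signature is visible from the anonymous-Apex test further down):

// Hypothetical sketch of the router class; the real zAPI was not posted.
public class zAPI {
    public static String doPost(String payload, String operation) {
        // Dispatch on the requested operation name
        if (operation == 'fhaQuery') {
            return labFlows.fhaQuery(payload);
        }
        return null;
    }
}

In this case, the call is to labFlows.queryContacts():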
public static Map<String, List<String>> queryContacts(String[] stringArray) {
    // First get the id to get to the associative entity, Contact_Deals__c id
    List<Contact_Deals__c> dealQuery = [SELECT Id, Deal__r.Id, Deal__r.FHA_Number__c, Deal__r.Name, Deal__r.Owner.Name
                                        FROM Contact_Deals__c
                                        WHERE Deal__r.FHA_Number__c IN :stringArray];
    // Using the id in the associative entity, grab the contact information
    List<Contact_Deals__c> contactQuery = [SELECT Contact__r.Name, Contact__r.Id, Contact__r.Owner.Name, Contact__r.Owner.Id, Contact__r.Rule_Class__c, Contact__r.Primary_Borrower_Y_N__c
                                           FROM Contact_Deals__c
                                           WHERE Id IN :dealQuery];
    // Grab all deal id's
    Map<String, List<String>> result = new Map<String, List<String>>();
    for (Contact_Deals__c i : dealQuery) {
        List<String> temp = new List<String>();
        temp.add(i.Deal__r.Id);
        temp.add(i.Deal__r.Owner.Name);
        temp.add(i.Deal__r.FHA_Number__c);
        temp.add(i.Deal__r.Name);
        for (Contact_Deals__c j : contactQuery) {
            if (j.Id == i.Id) {
                // This doesn't really help if there are multiple primary borrowers on a deal - but that should be a SF workflow rule IMO
                if (j.Contact__r.Primary_Borrower_Y_N__c == 'Yes') {
                    temp.add(j.Contact__r.Owner.Id);
                    temp.add(j.Contact__r.Id);
                    temp.add(j.Contact__r.Name);
                    temp.add(j.Contact__r.Owner.Name);
                    temp.add(j.Contact__r.Rule_Class__c);
                    break;
                }
            }
        }
        result.put(i.Deal__r.Id, temp);
    }
    return result;
}
The only thing I've changed is moving the temp list additions before the inner loop (previously, temp would only capture things from the inner loop). The error above references line 13, which is the first SOQL call:
List<Contact_Deals__c> dealQuery = [SELECT Id, Deal__r.Id, Deal__r.FHA_Number__c, Deal__r.Name, Deal__r.Owner.Name
                                    FROM Contact_Deals__c
                                    WHERE Deal__r.FHA_Number__c IN :stringArray];
I've tested this function in the Apex anonymous window and it worked perfectly:
string a = '00035398,00035401';
string result = zAPI.doPost(a, 'fhaQuery');
system.debug(result);
Results:
13:36:54:947 USER_DEBUG
[5]|DEBUG|{"a09d000000HRvBAD":["a09d000000HRvBAD","Contacta","11111111","Plaza
Center
Apts"],"a09d000000HsVAD":["a09d000000HsVAD","Contactb","22222222","The
Garden"]}
So this is working. The next part is maybe looking at my Python script that is calling the API:
def origQuery(file_name, operation):
    csv_text = ""
    with open(file_name) as csvfile:
        reader = csv.reader(csvfile, dialect='excel')
        for row in reader:
            csv_text += row[0]+','
            csv_text = csv_text[:-1]
    data = json.dumps({
        'data' : csv_text,
        'operation' : operation
    })
    results = requests.post(url, headers=headers, data=data)
    print results.text

origQuery('myfile.csv', 'fhaQuery')
I've tried looking up this ORA-01460 Apex error, but I can't find anything that will help me fix this issue.
Can anyone shed more light on what this error is telling me?
Thank you all so much!
It turns out the error was in the Python script. For some reason, the following code wasn't functioning as it was supposed to:
with open(file_name) as csvfile:
    reader = csv.reader(csvfile, dialect='excel')
    for row in reader:
        csv_text += row[0]+','
        csv_text = csv_text[:-1]
This was returning one very long string with zero delimiters: the final line, sitting inside the loop, was cutting off each delimiter as soon as it was added. What I needed instead was:
with open(file_name) as csvfile:
    reader = csv.reader(csvfile, dialect='excel')
    for row in reader:
        csv_text += row[0]+','
csv_text = csv_text[:-1]
This cuts off only the final ','.
The error was occurring because the single long string was above 4,000 characters.
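For what it's worth, a join-based version sidesteps the trailing-comma bookkeeping entirely (a sketch under the same assumptions as the script above):

import csv

with open(file_name) as csvfile:
    reader = csv.reader(csvfile, dialect='excel')
    # join inserts the comma between items, so there is no trailing delimiter to strip
    csv_text = ','.join(row[0] for row in reader)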

Matlab: Optimize this (pt 2)

Here's another one:
ValidFirings = ((DwellTimes > 30/(24*60*60)) | (GroupCount > 1));
for i = length(ValidFirings):-1:2
    if (~ValidFirings(i))
        DwellTimes(i-1) = DwellTimes(i) + DwellTimes(i-1);
        GroupCount(i-1) = GroupCount(i) + GroupCount(i-1);
        DwellTimes(i) = [];
        GroupCount(i) = [];
        ReducedWallTime(i) = [];
        ReducedWallId(i) = [];
    end
end
It appears that the intent is to sum up DwellTimes based on whether or not the sensor firing is considered valid. So I have a vector of sensor firings that I'm walking through backwards, summing each row into the previous row if the current row is not marked as valid.
I can visualize this in C/C++, but I don't know how to translate it into better Matlab vector notation. As it stands now, this loop is very slow.
EDIT:
Could I use some form of DwellTimes = DwellTimes( cumsum( ValidFirings ))?
As with your previous question, replacing the for loop should improve the performance.
%# Find the indices for invalid firings (note the parentheses: the negation
%# must cover the whole validity test)
idx = find(~((DwellTimes > 30/(24*60*60)) | (GroupCount > 1)));
%# Index the appropriate elements and add them (start the addition
%# from the second element)
%# This eliminates the for loop
DwellTimes(idx(2:end)-1) = DwellTimes(idx(2:end)-1) + DwellTimes(idx(2:end));
GroupCount(idx(2:end)-1) = GroupCount(idx(2:end)-1) + GroupCount(idx(2:end));
%# Now remove all the unwanted elements (this removes the
%# first element if it was a bad firing. Modify as necessary)
GroupCount(idx) = [];
DwellTimes(idx) = [];
I would consolidate first as shown, then eliminate the invalid data. This avoids the constant resizing of the data. Note that you can't reverse the order of the FOR loop due to the way that the values propagate.
ValidFirings = ((DwellTimes > 30/(24*60*60)) | (GroupCount > 1));
for i = length(ValidFirings):-1:2
    if (~ValidFirings(i))
        DwellTimes(i-1) = DwellTimes(i) + DwellTimes(i-1);
        GroupCount(i-1) = GroupCount(i) + GroupCount(i-1);
    end
end
DwellTimes = DwellTimes(ValidFirings);
GroupCount = GroupCount(ValidFirings);
ReducedWallTime = ReducedWallTime(ValidFirings);
ReducedWallId = ReducedWallId(ValidFirings);
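Following up on the cumsum idea in the EDIT: it can be combined with accumarray so that each valid firing opens a group and the invalid firings after it fold into that group. A sketch using the same variables as above; note the edge case in the comments:

% Each valid firing gets its own group id; the invalid firings that follow it
% share that id. If the first firing is invalid, the leading run is merged
% into group 1, which differs slightly from the loop (it keeps element 1 as
% its own row).
ValidFirings = (DwellTimes > 30/(24*60*60)) | (GroupCount > 1);
g = max(cumsum(ValidFirings(:)), 1);             % group id per firing
DwellTimes = accumarray(g, DwellTimes(:));       % sum dwell times within each group
GroupCount = accumarray(g, GroupCount(:));       % sum group counts within each group
ReducedWallTime = ReducedWallTime(ValidFirings); % keep only rows of valid firings
ReducedWallId = ReducedWallId(ValidFirings);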