I am new to the MediaWiki web service API.
I am trying to pull the data in the table on this page: https://lessonslearned.em.se.com/lessons/Main_Page
I'm testing it using its sandbox: https://lessonslearned.em.se.com/lessons/Special:ApiSandbox
Viewing the source of the page, I can see for example:
{{#ask:[[Category:Lesson]][[Has region::NAM]][[Creation date::>{{#time: r | -1 year}}]]|format=count}}
The code above should return the lessons from NAM in the last 12 months, which should have a value of 1 based on the table. But I am getting an error in this part: [[Creation date::>{{#time: r | -1 year}}]]
Error message:
{
  "error": {
    "query": [
      "\"Thu, 31 Mar 2022 03:50:49 +0000 -1 03202233103\" contains an extrinsic dash or other characters that are invalid for a date interpretation."
    ]
  }
}
I tried breaking the code into its parts and found out that the root cause is the hyphen (-) in -1 year. I also checked the documentation of the #time parser function (https://www.mediawiki.org/wiki/Help:Extension:ParserFunctions##time), which has samples similar to this, but in my case it is not working.
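One workaround I plan to try (untested, and assuming Semantic MediaWiki can parse a plain ISO date in the comparator) is to have #time emit an ISO date instead of the RFC 2822 format:
{{#ask:[[Category:Lesson]][[Has region::NAM]][[Creation date::>{{#time: Y-m-d | -1 year}}]]|format=count}}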
Hope someone can give me at least a reference for this problem. Thanks!
When calling GetReportsListAsync from my Xero App for a tenant, I'm getting the InnerException "Sequence contains more than one matching element".
I used Xero's API Explorer to call the same Xero Accounting API > Reports endpoint > Get Reports List operation for the same tenant, and I can see that there is a double-up of the same GST Calculation (ReportType) Activity Statement (ReportName) for the same period of 1 Jan 2022 to 31 Jan 2022 (ReportDate).
How can there be two of the same report for the same tenant for the same period? Which is the correct one, so that I can take its ReportID and then call GetReportFromId?
Any help greatly appreciated.
Wanted to know if this is possible: I have 2 APIs I am testing.
API 1 gives a list of the total jobs posted by the user.
Response =
"jobId": 15596, "jobTitle": "PHP developer"
API 2 gives the following response:
"total CVs": 19, "0-7days": 12,"status": "New Resume"
Meaning: in the bucket New Resume we have a total of 19 CVs, and of those 19 CVs, 12 have an ageing of 0-7 days. This response relates to the jobs posted.
When I hit the APIs I am getting the correct numbers, but on the front end API 1 will be used as a dropdown to select a job, and then New Resume, ageing, and total CVs will be shown according to that job.
What I wanted to know: is it possible to test the two APIs together, sort of using a filter like the front end does, or is the only way to test to check whether each response I am getting is correct?
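Roughly what I have in mind is the chained check sketched below; the base URL and paths are made-up placeholders, not the real endpoints.
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Hypothetical sketch: chain the two APIs the way the front end does -
// pick a jobId from API 1, then query API 2 for that job's CV buckets.
public class JobCvChainTest {
    static final HttpClient CLIENT = HttpClient.newHttpClient();
    static final String BASE = "https://example.com/api"; // placeholder

    static String get(String path) throws Exception {
        HttpRequest req = HttpRequest.newBuilder(URI.create(BASE + path)).GET().build();
        return CLIENT.send(req, HttpResponse.BodyHandlers.ofString()).body();
    }

    public static void main(String[] args) throws Exception {
        // API 1: the job list, as the front-end dropdown would load it.
        String jobs = get("/jobs");
        System.out.println(jobs);

        // API 2: buckets for one jobId taken from the API 1 response
        // (hard-coded here; a real test would parse it out of `jobs`).
        long jobId = 15596;
        String buckets = get("/jobs/" + jobId + "/cv-buckets");
        System.out.println(buckets);

        // A real assertion would parse `buckets` and check, e.g., that the
        // 0-7 day count never exceeds the bucket's "total CVs".
    }
}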
I have created a Google BigQuery query job with my own job id successfully, and I was able to use the job id again and get successful results yesterday.
But today the same job id is not working.
I tried the job id with the project id in the Google BigQuery UI and got the same 404 error, shown below:
{
  "error": {
    "errors": [
      {
        "domain": "global",
        "reason": "notFound",
        "message": "Not Found: Table <projectId:some random generated String.some random generated String>"
      }
    ],
    "code": 404,
    "message": "Not Found: Table <projectId:some random generated String.some random generated String>"
  }
}
Please help me: is there a lifetime for a job? Or is there any specific configuration required while creating a job id so that it works permanently?
I am using the Java API of Google BigQuery for the above implementation. The logic used is below:
Query Job Creation logic:
// Model classes come from the google-api-services-bigquery client.
import com.google.api.services.bigquery.model.Job;
import com.google.api.services.bigquery.model.JobConfiguration;
import com.google.api.services.bigquery.model.JobConfigurationQuery;
import com.google.api.services.bigquery.model.JobReference;

// The :dataset.:tablename placeholders are substituted elsewhere.
String query = "SELECT count(*) AS TOTAL_RECORDS FROM :dataset.:tablename;";
String expectedJobId = "someuniqueString";

// Configure the query and allow cached results.
JobConfigurationQuery queryConfig = new JobConfigurationQuery()
        .setQuery(query);
queryConfig.setUseQueryCache(true);
JobConfiguration config = new JobConfiguration()
        .setQuery(queryConfig);

// Attach a caller-chosen job id so the job can be looked up again later.
JobReference jobReference = new JobReference();
jobReference.setJobId(expectedJobId);
jobReference.setProjectId(PROJECT_ID);

Job job = new Job().setId(expectedJobId).setConfiguration(config);
job.setJobReference(jobReference);
job = bigqueryService.jobs()
        .insert(PROJECT_ID, job).execute();
Results retrieved using the above job id:
// Fetch the query results for the same job id.
GetQueryResults queryRequest = bigqueryService.jobs()
        .getQueryResults(PROJECT_ID, expectedJobId);
GetQueryResultsResponse queryResponse = queryRequest.execute();
The error you're seeing is not a problem looking up the job but a problem looking up the result table. As you noticed, getQueryResults() for a particular job will only work for up to 24 hours; after that, the table that is created to store the results expires and gets cleaned up.
If you find that this happens within the 24-hour window, you might want to check that the job actually completed successfully. You can use bigqueryService.jobs().get() to look up the job status.
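For reference, a minimal status check with the same client and ids as in the question might look like this (a sketch, not the full diagnosis):
// Look up the job itself (not its results) and inspect its status.
Job jobInfo = bigqueryService.jobs().get(PROJECT_ID, expectedJobId).execute();
System.out.println(jobInfo.getStatus().getState());       // PENDING, RUNNING, or DONE
System.out.println(jobInfo.getStatus().getErrorResult()); // null if the job succeeded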
If that doesn't help, send us the job id and we (the BigQuery team) can look up in the server logs what is going on with that job.
Sometimes the problem is an incorrect dataset location. In my code I have a config from which I set the dataset location when executing a query. I messed up the location and started getting this error; after two hours of debugging I finally found the issue.
Correcting the dataset location made it work fine.
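For anyone hitting the same thing with the model classes used in the question: newer versions of the client expose the job location, and a location mismatch between job and dataset also surfaces as this 404. A hedged sketch ("EU" is just an example value):
// The job's location must match the dataset's region, or lookups 404.
jobReference.setLocation("EU");

// The location can also be passed when fetching the results:
GetQueryResultsResponse resp = bigqueryService.jobs()
        .getQueryResults(PROJECT_ID, expectedJobId)
        .setLocation("EU")
        .execute();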
I am trying to get information from the Yodlee API.
I have a test user where I've implemented adding an account, and I got this refresh status back from the site:
{
  siteRefreshStatus: {
    siteRefreshStatusId: 8
    siteRefreshStatus: "REFRESH_COMPLETED_WITH_UNCERTAIN_ACCOUNT"
  }
  siteRefreshMode: {
    refreshModeId: 2
    refreshMode: "NORMAL"
  }
  updateInitTime: 0
  nextUpdate: 1391603301
  code: 403
  noOfRetry: 0
}
Now when I try to perform a search and get the actual transactions, I get this error:
{
errorOccured: "true"
exceptionType: "com.yodlee.core.IllegalArgumentValueException"
refrenceCode: "_57c250a9-71e8-4d4b-830d-0f51a4811516"
message: "Invalid argument value: Container type cannot be null"
}
The problem is that I have container type!
Check out the parameters I send:
cobSessionToken=08062013_2%3Ad02590d4474591e507129bf6baaa58e81cd9eaacb5753e9441cd0b1ca3b8bd00a3e6b6a943956e947458307c1bb94b505e2eb4398f890040a3db8c98606c0392
&userSessionToken=08062013_0%3A8e8ef9dd4f294e0f16dedf98c1794b96bf33f2e1f2686eda2f35dfe4901dd3a871eed6d08ce52c99a74deb004c025ebf4bf94c7b17baf8ba18aacb331588f5f5
&transactionSearchRequest.containerType=bank
&transactionSearchRequest.higherFetchLimit=1000
&transactionSearchRequest.lowerFetchLimit=1
&transactionSearchRequest.resultRange.endNumber=500
&transactionSearchRequest.resultRange.startNumber=1
&transactionSearchRequest.searchClients.clientId=1
&transactionSearchRequest.searchClients.clientName=DataSearchService
&transactionSearchRequest.ignoreUserInput=true
&transactionSearchRequest.searchFilter.currencyCode=USD
&transactionSearchRequest.searchFilter.postDateRange.fromDate=01-01-2014
&transactionSearchRequest.searchFilter.postDateRange.toDate=01-31-2014
&transactionSearchRequest.searchFilter+.transactionSplitType=ALL_TRANSACTION
&transactionSearchRequest.searchFilter.itemAccountId+.identifier=10008425
&transactionSearchRequest.searchClients=DEFAULT_SERVICE_CLIENT
An error occurred while adding the account, which you can tell from the parameter code: 403, and hence you will not see that account when you call the getItemSummary API. An account is successfully linked only if the code has a value of zero, e.g. code:0. 403 is an error that is shown when Yodlee's data agent has encountered an unhandled use case, so for any such error you should file a service request using the Yodlee customer care tool.
To know more about error codes, please visit:
https://developer.yodlee.com/FAQs/Error_Codes
The status is shown as completed (siteRefreshStatus: "REFRESH_COMPLETED_WITH_UNCERTAIN_ACCOUNT") because the addition of any account is followed by a refresh, in which Yodlee's data agent logs into the FI's website and tries to scrape data. Completion of that activity is denoted as REFRESH_COMPLETED even when an error occurred.
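In practice that means the refresh response should be checked before calling the search API. A minimal sketch, assuming the response has been parsed with Gson (the field name mirrors the JSON shown above; everything else is schematic):
import com.google.gson.JsonObject;
import com.google.gson.JsonParser;

public class RefreshGate {
    // Throws if the add-account refresh ended in an error such as 403.
    static void requireLinked(String refreshJson) {
        JsonObject info = JsonParser.parseString(refreshJson).getAsJsonObject();
        int code = info.get("code").getAsInt();
        if (code != 0) {
            // Non-zero (e.g. 403) means Yodlee's data agent hit an unhandled
            // case; file a service request instead of calling the search API.
            throw new IllegalStateException("Refresh failed, code=" + code);
        }
        // code == 0: the account is linked; safe to search transactions.
    }
}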
TransactionSearch issue -
I can see two of the parameters with a "+" sign. Since transactionSplitType and containerType are dependent on each other, the error is thrown.
&transactionSearchRequest.searchFilter+.transactionSplitType=ALL_TRANSACTION
&transactionSearchRequest.searchFilter.itemAccountId+.identifier=10008425
The right parameters are -
&transactionSearchRequest.searchFilter.transactionSplitType=ALL_TRANSACTION
&transactionSearchRequest.searchFilter.itemAccountId.identifier=10008425
When using the Omniture Data Warehouse API Explorer (https://developer.omniture.com/en_US/get-started/api-explorer#DataWarehouse.Request), the following request produces a 'Date_Granularity is invalid' response. Does anyone have experience with this? The API documentation (https://developer.omniture.com/en_US/documentation/data-warehouse/pdf) states that the following values are acceptable: "none, hour, day, week, month, quarter, year."
{
"Breakdown_List":[
"evar14",
"ip",
"evar64",
"evar65",
"prop63",
"evar6",
"evar16"
],
"Contact_Name":"[hidden]",
"Contact_Phone":"[hidden]",
"Date_From":"12/01/11",
"Date_To":"12/14/11",
"Date_Type":"range",
"Email_Subject":"[hidden]",
"Email_To":"[hidden]",
"FTP_Dir":"/",
"FTP_Host":"[hidden]",
"FTP_Password":"[hidden]",
"FTP_Port":"21",
"FTP_UserName":"[hidden]",
"File_Name":"test-report",
"Metric_List":[ ],
"Report_Name":"test-report",
"rsid":"[hidden]",
"Date_Granularity":"hour",
}
Response:
{
"errors":[
"Date_Granularity is invalid."
]
}
Old question, just noticing it now.
Data Warehouse did not support the hour granularity correctly until Jan 2013 (the error you saw was a symptom of this). It was then corrected for date ranges of less than 14 days. In the July 2013 maintenance release of v15 the 14-day limit should be gone, but I have not verified that myself.
As always, the more data you request, the longer the DW processing will take, so I recommend keeping ranges to a maximum of a month and uncompressed file sizes under 1 GB, though I hear 2 GB should now be supported.
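So, assuming the Jan 2013 fix described above, a request like the one in the question should work at hour granularity once the range stays under 14 days, e.g. changing only these fields:
"Date_From":"12/01/11",
"Date_To":"12/07/11",
"Date_Granularity":"hour"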
If you still have issues please let us know.
Thanks C.